Diffstat (limited to 'open_issues/performance')
-rw-r--r--  open_issues/performance/degradation.mdwn                          |   52
-rw-r--r--  open_issues/performance/fork.mdwn                                 |   37
-rw-r--r--  open_issues/performance/io_system/binutils_ld_64ksec.mdwn         |   39
-rw-r--r--  open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz  |  bin 0 -> 378092 bytes
-rw-r--r--  open_issues/performance/io_system/clustered_page_faults.mdwn      |  162
-rw-r--r--  open_issues/performance/io_system/read-ahead.mdwn                 | 1567
-rw-r--r--  open_issues/performance/ipc_virtual_copy.mdwn                     |  395
-rw-r--r--  open_issues/performance/microbenchmarks.mdwn                      |   13
-rw-r--r--  open_issues/performance/microkernel_multi-server.mdwn             |   47
9 files changed, 2312 insertions(+), 0 deletions(-)
diff --git a/open_issues/performance/degradation.mdwn b/open_issues/performance/degradation.mdwn
new file mode 100644
index 00000000..1aaae4d2
--- /dev/null
+++ b/open_issues/performance/degradation.mdwn
@@ -0,0 +1,52 @@
+[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!meta title="Degradation of GNU/Hurd ``system performance''"]]
+
+[[!tag open_issue_gnumach open_issue_hurd]]
+
+[[!toc]]
+
+
+# Email, [[!message-id "87mxg2ahh8.fsf@kepler.schwinge.homeip.net"]] (bug-hurd, 2011-07-25, Thomas Schwinge)
+
+> Building a certain GCC configuration on a freshly booted system: 11 h.
+> Remove build tree, build it again (2nd): 12 h 50 min. Huh. Remove build
+> tree, reboot, build it again (1st): back to 11 h. Remove build tree, build
+> it again (2nd): 12 h 40 min. Remove build tree, build it again (3rd): 15 h.
+
+IRC, freenode, #hurd, 2011-07-23:
+
+ < antrik> tschwinge: yes, the system definitely gets slower with
+ time. after running for a couple of weeks, it needs at least twice as
+ long to open a new shell for example
+ < antrik> I don't know whether this is only related to swap usage, or there
+ are some serious fragmentation issues
+ < braunr> antrik: both could be induced by fragmentation
+
+
+# During [[IPC_virtual_copy]] testing
+
+IRC, freenode, #hurd, 2011-09-02:
+
+ <manuel> interestingly, running it several times has made the performance
+ drop quite much (i'm getting 400-500MB/s with 1M now, compared to nearly
+ 800 fifteen minutes ago)
+ <braunr> manuel: i observed the same behaviour
+ [...]
+
+
+# IRC, freenode, #hurd, 2011-09-22
+
+See [[/open_issues/resource_management_problems/pagers]], IRC, freenode, #hurd,
+2011-09-22.
+
+
+# [[ext2fs_page_cache_swapping_leak]]
diff --git a/open_issues/performance/fork.mdwn b/open_issues/performance/fork.mdwn
new file mode 100644
index 00000000..5ceb6455
--- /dev/null
+++ b/open_issues/performance/fork.mdwn
@@ -0,0 +1,37 @@
+[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_glibc open_issue_hurd]]
+
+Our [[`fork` implementation|glibc/fork]] is nontrivial.
+
+To do: hard numbers.
+[[Microbenchmarks]]?
+
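+A minimal sketch of the kind of microbenchmark that could provide such
+numbers: time a loop of `fork`/`waitpid` round trips.  This is an
+illustration only, not part of any existing test suite.
+
+    /* fork-bench.c -- time N fork/waitpid round trips.  */
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <sys/time.h>
+    #include <sys/wait.h>
+    #include <unistd.h>
+
+    int
+    main (int argc, char **argv)
+    {
+      int i, n = (argc > 1) ? atoi (argv[1]) : 1000;
+      struct timeval start, end;
+
+      gettimeofday (&start, NULL);
+      for (i = 0; i < n; i++)
+        {
+          pid_t pid = fork ();
+          if (pid == 0)
+            _exit (0);          /* Child: exit immediately.  */
+          else if (pid < 0)
+            {
+              perror ("fork");
+              return 1;
+            }
+          waitpid (pid, NULL, 0);
+        }
+      gettimeofday (&end, NULL);
+
+      printf ("%d forks in %.3f s\n", n,
+              (end.tv_sec - start.tv_sec)
+              + (end.tv_usec - start.tv_usec) / 1e6);
+      return 0;
+    }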
+
+# Windows / Cygwin
+
+ * <http://www.google.com/search?q=cygwin+fork>
+
+ * <http://www.redhat.com/support/wpapers/cygnus/cygnus_cygwin/architecture.html>
+
+ In particular, *5.6. Process Creation*.
+
+ * <http://archive.gamedev.net/community/forums/topic.asp?topic_id=360290>
+
+ * <http://cygwin.com/cgi-bin/cvsweb.cgi/src/winsup/cygwin/how-cygheap-works.txt?cvsroot=src>
+
+ > Cygwin has recently adopted something called the "cygwin heap". This is
+ > an internal heap that is inherited by forked/execed children. It
+ > consists of process specific information that should be inherited. So
+ > things like the file descriptor table, the current working directory, and
+ > the chroot value live there.
+
+ * <http://www.perlmonks.org/?node_id=588994>
diff --git a/open_issues/performance/io_system/binutils_ld_64ksec.mdwn b/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
new file mode 100644
index 00000000..931fd0ee
--- /dev/null
+++ b/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
@@ -0,0 +1,39 @@
+[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_hurd]]
+
+This one may be considered a testcase for [[I/O system
+optimization|community/gsoc/project_ideas/disk_io_performance]].
+
+It is taken from the [[binutils testsuite|binutils]],
+`ld/ld-elf/sec64k.exp`, where this
+test may occasionally [[trigger a timeout|binutils#64ksec]]. It is
+extracted from cdf7c161ebd4a934c9e705d33f5247fd52975612 sources, 2010-10-24.
+
+ $ wget -O - http://www.gnu.org/software/hurd/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz | xz -d | tar -x
+ $ cd test/
+ $ \time ./ld-new.stripped -o dump dump?.o dump??.o
+ 0.00user 0.00system 2:46.11elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
+ 0inputs+0outputs (0major+0minor)pagefaults 0swaps
+
+On the idle grubber, this one repeatedly takes a few minutes of wall time to
+complete successfully, compared to a few seconds on a GNU/Linux system.
+
+While processing the object files, there is heavy interaction with the
+relevant [[hurd/translator/ext2fs]] process.  Running
+[[hurd/debugging/rpctrace]] on the testee shows that (primarily) an
+ever-repeating series of `io_seek` and `io_read` RPCs is being processed.
+Running the testee on GNU/Linux with strace shows the equivalent pattern
+(`_llseek`, `read`) -- but Linux's I/O system isn't as slow as the Hurd's.
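+
+The dominant pattern is easy to reproduce in isolation, which may help
+profile the RPC path without running ld itself.  A sketch; the file name
+and loop bounds are arbitrary:
+
+    /* seek-read.c -- mimic the io_seek/io_read pattern rpctrace shows:
+       many seeks followed by small reads on the same file.  */
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <unistd.h>
+
+    int
+    main (void)
+    {
+      char buf[512];
+      int i, fd = open ("dump0.o", O_RDONLY);
+
+      if (fd < 0)
+        {
+          perror ("open");
+          return 1;
+        }
+      for (i = 0; i < 100000; i++)
+        if (lseek (fd, (i % 64) * 1024, SEEK_SET) < 0
+            || read (fd, buf, sizeof buf) < 0)
+          {
+            perror ("lseek/read");
+            break;
+          }
+      close (fd);
+      return 0;
+    }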
+
+As Samuel figured out later, this slowness may in fact be due to a Xen-specific
+issue, see [[Xen_lseek]]. After the latter has been addressed, we can
+re-evaluate this issue here.
diff --git a/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz b/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz
new file mode 100644
index 00000000..6d7c606c
--- /dev/null
+++ b/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz
Binary files differ
diff --git a/open_issues/performance/io_system/clustered_page_faults.mdwn b/open_issues/performance/io_system/clustered_page_faults.mdwn
new file mode 100644
index 00000000..a3baf30d
--- /dev/null
+++ b/open_issues/performance/io_system/clustered_page_faults.mdwn
@@ -0,0 +1,162 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach open_issue_hurd]]
+
+[[community/gsoc/project_ideas/disk_io_performance]].
+
+[[!toc]]
+
+
+# IRC, freenode, #hurd, 2011-02-16
+
    <braunr> except for the kernel, everything in an address space is
      represented with a VM object
+ <braunr> those objects can represent anonymous memory (from malloc() or
+ because of a copy-on-write)
+ <braunr> or files
+ <braunr> on classic Unix systems, these are files
+ <braunr> on the Hurd, these are memory objects, backed by external pagers
+ (like ext2fs)
+ <braunr> so when you read a file
+ <braunr> the kernel maps it from ext2fs in your address space
+ <braunr> and when you access the memory, a fault occurs
+ <braunr> the kernel determines it's a region backed by ext2fs
+ <braunr> so it asks ext2fs to provide the data
+ <braunr> when the fault is resolved, your process goes on
    <etenil> does the fault occur because Mach doesn't know how to access the
      memory?
    <braunr> it occurs because Mach intentionally didn't back the region with
      physical memory
+ <braunr> the MMU is programmed not to know what is present in the memory
+ region
+ <braunr> or because it's read only
+ <braunr> (which is the case for COW faults)
+ <etenil> so that means this bit of memory is a buffer that ext2fs loads the
+ file into and then it is remapped to the application that asked for it
+ <braunr> more or less, yes
+ <braunr> ideally, it's directly written into the right pages
+ <braunr> there is no intermediate buffer
+ <etenil> I see
+ <etenil> and as you told me before, currently the page faults are handled
+ one at a time
+ <etenil> which wastes a lot of time
+ <braunr> a certain amount of time
+ <etenil> enough to bother the user :)
+ <etenil> I've seen pages have a fixed size
+ <braunr> yes
+ <braunr> use the PAGE_SIZE macro
+ <etenil> and when allocating memory, the size that's asked for is rounded
+ up to the page size
+ <etenil> so if I have this correctly, it means that a file ext2fs provides
+ could be split into a lot of pages
+ <braunr> yes
+ <braunr> once in memory, it is managed by the page cache
+ <braunr> so that pages more actively used are kept longer than others
+ <braunr> in order to minimize I/O
+ <etenil> ok
+ <braunr> so a better page cache code would also improve overall performance
+ <braunr> and more RAM would help a lot, since we are strongly limited by
+ the 768 MiB limit
+ <braunr> which reduces the page cache size a lot
    <etenil> but the problem is that reading a whole file in means triggering
      many page faults just for one file
+ <braunr> if you want to stick to the page clustering thing, yes
    <braunr> you want fewer page faults, so that there is less IPC between the
      kernel and the pager
+ <etenil> so either I make pages bigger
+ <etenil> or I modify Mach so it can check up on a range of pages for faults
+ before actually processing
+ <braunr> you *don't* change the page size
+ <etenil> ah
+ <etenil> that's hardware isn't it?
+ <braunr> in Mach, yes
+ <etenil> ok
+ <braunr> and usually, you want the page size to be the CPU page size
+ <etenil> I see
    <braunr> current CPUs can support multiple page sizes, but it becomes
      quite hard to correctly handle
+ <braunr> and bigger page sizes mean more fragmentation, so it only suits
+ machines with large amounts of RAM, which isn't the case for us
+ <etenil> ok
+ <etenil> so I'll try the second approach then
    <braunr> that's what i'd recommend
+ <etenil> ok
+
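+What the conversation describes, seen from the application side: mapping a
+file brings no data in by itself; the first access faults, and the kernel
+asks the backing pager (ext2fs) for the page.  A usage sketch with an
+arbitrary file:
+
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <sys/mman.h>
+    #include <unistd.h>
+
+    int
+    main (void)
+    {
+      int fd = open ("/etc/motd", O_RDONLY);
+      if (fd < 0)
+        return 1;
+
+      /* This only sets up the mapping; no page is resident yet.  */
+      char *p = mmap (NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
+      if (p == MAP_FAILED)
+        return 1;
+
+      /* First access: a page fault, resolved by the ext2fs pager.  */
+      putchar (p[0]);
+
+      munmap (p, 4096);
+      close (fd);
+      return 0;
+    }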
+
+# IRC, freenode, #hurd, 2011-02-16
+
+ <antrik> etenil: OSF Mach does have clustered paging BTW; so that's one
+ place to start looking...
+ <antrik> (KAM ported the OSF code to gnumach IIRC)
+ <antrik> there is also an existing patch for clustered paging in libpager,
+ which needs some adaptation
+ <antrik> the biggest part of the task is probably modifying the Hurd
+ servers to use the new interface
+ <antrik> but as I said, KAM's code should be available through google, and
+ can serve as a starting point
+
+<http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html>
+
+
+# IRC, freenode, #hurd, 2011-07-22
+
    <braunr> but concerning clustered pageins/outs, i'm not sure it's a mach
      interface limitation
+ <braunr> the external memory pager interface does allow multiple pages to
      be transferred
+ <braunr> isn't it an internal Mach VM problem ?
+ <braunr> isn't it simply the page fault handler ?
+ <antrik> braunr: are you sure? I was under the impression that changing the
+ pager interface was among the requirements...
    <antrik> hm... I wonder whether for pageins, it could actually be handled
      in the pagers instead of Mach... though this wouldn't work for
      pageouts, so probably not very helpful
+ <braunr> antrik: i'm almost sure
    <braunr> but i've been proven wrong many times, so ..
+ <braunr> there are two main facts that lead me to think this
+ <braunr> 1/
+ http://www.gnu.org/software/hurd/gnumach-doc/Memory-Objects-and-Data.html#Memory-Objects-and-Data
+ says lengths are provided and doesn't mention the limitation
+ <braunr> 2/ when reading about UVM, one of the major improvements (between
+ 10 and 30% of global performance depending on the benchmarks) was
+ implementing the madvise semantics
+ <braunr> and this didn't involve a new pager interface, but rather a new
+ page fault handler
+ <antrik> braunr: hm... the interface indeed looks like it can handle
+ multiple pages in both directions... perhaps it was at the Hurd level
+ where the pager interface needs to be modified, not the Mach one?...
+ <braunr> antrik: would be nice wouldn't it ? :)
+ <braunr> antrik: more probably the page fault handler
+
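+The interface braunr refers to does carry a length, so a pager could in
+principle answer one request with a whole cluster.  A sketch of what that
+might look like on the pager side, modeled on the interface documented in
+the gnumach manual; `read_backing_store` is a hypothetical helper standing
+in for the pager's actual I/O code:
+
+    #include <mach.h>
+
+    /* Hypothetical helper: read SIZE bytes at START into *DATA.  */
+    extern int read_backing_store (vm_offset_t start, vm_size_t size,
+                                   vm_offset_t *data);
+
+    /* Handle a kernel data request by supplying a whole (power-of-two)
+       cluster of pages in a single reply instead of a single page.  */
+    kern_return_t
+    my_memory_object_data_request (memory_object_t object,
+                                   memory_object_control_t control,
+                                   vm_offset_t offset,
+                                   vm_size_t length,
+                                   vm_prot_t desired_access)
+    {
+      vm_size_t cluster = 16 * vm_page_size;
+      vm_offset_t start = offset & ~(cluster - 1);
+      vm_offset_t data;
+
+      if (read_backing_store (start, cluster, &data) != 0)
+        return memory_object_data_error (control, offset, length,
+                                         KERN_FAILURE);
+
+      /* One reply carries the whole cluster.  */
+      return memory_object_data_supply (control, start, data, cluster,
+                                        VM_PROT_NONE, FALSE,
+                                        MACH_PORT_NULL);
+    }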
+
+# IRC, freenode, #hurd, 2011-09-28
+
+ <slpz> antrik: I've just recovered part of my old multipage I/O work
+ <slpz> antrik: I intend to clean and submit it after finishing the changes
+ to the pageout system.
+ <antrik> slpz: oh, great!
+ <antrik> didn't know you worked on multipage I/O
+ <antrik> slpz: BTW, have you checked whether any of the work done for GSoC
+ last year is any good?...
+ <antrik> (apart from missing copyright assignments, which would be a
+ serious problem for the Hurd parts...)
+ <slpz> antrik: It was seven years ago, but I did:
+ http://www.mail-archive.com/bug-hurd@gnu.org/msg10285.html :-)
+ <slpz> antrik: Sincerely, I don't think the quality of that code is good
+ enough to be considered... but I think it was my fault as his mentor for
+ not correcting him soon enough...
+ <antrik> slpz: I see
+ <antrik> TBH, I feel guilty myself, for not asking about the situation
+ immediately when he stopped attending meetings...
+ <antrik> slpz: oh, you even already looked into vm_pageout_scan() back then
+ :-)
diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
new file mode 100644
index 00000000..710c746b
--- /dev/null
+++ b/open_issues/performance/io_system/read-ahead.mdwn
@@ -0,0 +1,1567 @@
+[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach open_issue_hurd]]
+
+[[!toc]]
+
+
+# [[community/gsoc/project_ideas/disk_io_performance]]
+
+
+# [[gnumach_page_cache_policy]]
+
+
+# 2011-02
+
+[[Etenil]] has been working in this area.
+
+
+## IRC, freenode, #hurd, 2011-02-13
+
+ <etenil> youpi: Would libdiskfs/diskfs.h be in the right place to make
+ readahead functions?
+ <youpi> etenil: no, it'd rather be at the memory management layer,
+ i.e. mach, unfortunately
+ <youpi> because that's where you see the page faults
+ <etenil> youpi: Linux also provides a readahead() function for higher level
+ applications. I'll probably have to add the same thing in a place that's
+ higher level than mach
+ <youpi> well, that should just be hooked to the same common implementation
+ <etenil> the man page for readahead() also states that portable
      applications should avoid it, but it could be beneficial to have it for
+ portability
+ <youpi> it's not in posix indeed
+
+
+## IRC, freenode, #hurd, 2011-02-14
+
+ <etenil> youpi: I've investigated prefetching (readahead) techniques. One
+ called DiskSeen seems really efficient. I can't tell yet if it's patented
+ etc. but I'll keep you informed
+ <youpi> don't bother with complicated techniques, even the most simple ones
+ will be plenty :)
+ <etenil> it's not complicated really
+ <youpi> the matter is more about how to plug it into mach
+ <etenil> ok
    <youpi> then don't bother with potential patents
+ <antrik> etenil: please take a look at the work KAM did for last year's
+ GSoC
+ <youpi> just use a trivial technique :)
+ <etenil> ok, i'll just go the easy way then
+
+ <braunr> antrik: what was etenil referring to when talking about
+ prefetching ?
+ <braunr> oh, madvise() stuff
+ <braunr> i could help him with that
+
+
+## IRC, freenode, #hurd, 2011-02-15
+
+ <etenil> oh, I'm looking into prefetching/readahead to improve I/O
+ performance
+ <braunr> etenil: ok
+ <braunr> etenil: that's actually a VM improvement, like samuel told you
+ <etenil> yes
+ <braunr> a true I/O improvement would be I/O scheduling
+ <braunr> and how to implement it in a hurdish way
+ <braunr> (or if it makes sense to have it in the kernel)
+ <etenil> that's what I've been wondering too lately
+ <braunr> concerning the VM, you should look at madvise()
+ <etenil> my understanding is that Mach considers devices without really
+ knowing what they are
+ <braunr> that's roughly the interface used both at the syscall() and the
+ kernel levels in BSD, which made it in many other unix systems
+ <etenil> whereas I/O optimisations are often hard disk drives specific
+ <braunr> that's true for almost any kernel
+ <braunr> the device knowledge is at the driver level
+ <etenil> yes
+ <braunr> (here, I separate kernels from their drivers ofc)
+ <etenil> but Mach also contains some drivers, so I'm going through the code
+ to find the apropriate place for these improvements
    <braunr> you shouldn't touch the drivers at all
+ <etenil> true, but I need to understand how it works before fiddling around
+ <braunr> hm
+ <braunr> not at all
+ <braunr> the VM improvement is about pagein clustering
+ <braunr> you don't need to know how pages are fetched
+ <braunr> well, not at the device level
+ <braunr> you need to know about the protocol between the kernel and
+ external pagers
+ <etenil> ok
+ <braunr> you could also implement pageout clustering
+ <etenil> if I understand you well, you say that what I'd need to do is a
+ queuing system for the paging in the VM?
+ <braunr> no
+ <braunr> i'm saying that, when a page fault occurs, the kernel should
+ (depending on what was configured through madvise()) transfer pages in
+ multiple blocks rather than one at a time
+ <braunr> communication with external pagers is already async, made through
+ regular ports
+ <braunr> which already implement message queuing
+ <braunr> you would just need to make the mapped regions larger
+ <braunr> and maybe change the interface so that this size is passed
+ <etenil> mmh
+ <braunr> (also don't forget that page clustering can include pages *before*
+ the page which caused the fault, so you may have to pass the start of
+ that region too)
+ <etenil> I'm not sure I understand the page fault thing
+ <etenil> is it like a segmentation error?
+ <etenil> I can't find a clear definition in Mach's manual
+ <braunr> ah
+ <braunr> it's a fundamental operating system concept
+ <braunr> http://en.wikipedia.org/wiki/Page_fault
+ <etenil> ah ok
+ <etenil> I understand now
+ <etenil> so what's currently happening is that when a page fault occurs,
      Mach is transferring pages one at a time and wastes time
+ <braunr> sometimes, transferring just one page is what you want
+ <braunr> it depends on the application, which is why there is madvise()
+ <braunr> our rootfs, on the other hand, would benefit much from such an
+ improvement
    <braunr> in UVM, this optimization accounts for around 10% global
      performance improvement
+ <etenil> not bad
+ <braunr> well, with an improved page cache, I'm sure I/O would matter less
+ on systems with more RAM
+ <braunr> (and another improvement would make mach support more RAM in the
+ first place !)
+ <braunr> an I/O scheduler outside the kernel would be a very good project
+ IMO
+ <braunr> in e.g. libstore/storeio
+ <etenil> yes
+ <braunr> but as i stated in my thesis, a resource scheduler should be as
+ close to its resource as it can
+ <braunr> and since mach can host several operating systems, I/O schedulers
+ should reside near device drivers
    <braunr> and since current drivers are in the kernel, it makes sense to
      have it in the kernel too
+ <braunr> so there must be some discussion about this
+ <etenil> doesn't this mean that we'll have to get some optimizations in
+ Mach and have the same outside of Mach for translators that access the
+ hardware directly?
+ <braunr> etenil: why ?
+ <etenil> well as you said Mach contains some drivers, but in principle, it
+ shouldn't, translators should do disk access etc, yes?
+ <braunr> etenil: ok
+ <braunr> etenil: so ?
+ <etenil> well, let's say if one were to introduce SATA support in Hurd,
      nothing would stop him/her from doing so with a translator rather than
      in Mach
+ <braunr> you should avoid the term translator here
+ <braunr> it's really hurd specific
+ <braunr> let's just say a user space task would be responsible for that
+ job, maybe multiple instances of it, yes
+ <etenil> ok, so in this case, let's say we have some I/O optimization
+ techniques like readahead and I/O scheduling within Mach, would these
+ also apply to the user-space task, or would they need to be
+ reimplemented?
+ <braunr> if you have user space drivers, there is no point having I/O
+ scheduling in the kernel
+ <etenil> but we also have drivers within the kernel
+ <braunr> what you call readahead, and I call pagein/out clustering, is
+ really tied to the VM, so it must be in Mach in any case
+ <braunr> well
+ <braunr> you either have one or the other
+ <braunr> currently we have them in the kernel
+ <braunr> if we switch to DDE, we should have all of them outside
+ <braunr> that's why such things must be discussed
+ <etenil> ok so if I follow you, then future I/O device drivers will need to
+ be implemented for Mach
+ <braunr> currently, yes
    <braunr> but preferably, someone should continue the work that has been
      done on DDE so that drivers are outside the kernel
+ <etenil> so for the time being, I will try and improve I/O in Mach, and if
+ drivers ever get out, then some of the I/O optimizations will need to be
+ moved out of Mach
+ <braunr> let me remind you one of the things i said
+ <braunr> i said I/O scheduling should be close to their resource, because
+ we can host several operating systems
+ <braunr> now, the Hurd is the only system running on top of Mach
+ <braunr> so we could just have I/O scheduling outside too
+ <braunr> then you should consider neighbor hurds
+ <braunr> which can use different partitions, but on the same device
+ <braunr> currently, partitions are managed in the kernel, so file systems
+ (and storeio) can't make good scheduling decisions if it remains that way
+ <braunr> but that can change too
+ <braunr> a single storeio representing a whole disk could be shared by
+ several hurd instances, just as if it were a high level driver
+ <braunr> then you could implement I/O scheduling in storeio, which would be
+ an improvement for the current implementation, and reusable for future
+ work
+ <etenil> yes, that was my first instinct
+ <braunr> and you would be mostly free of the kernel internals that make it
+ a nightmare
+ <etenil> but youpi said that it would be better to modify Mach instead
+ <braunr> he mentioned the page clustering thing
+ <braunr> not I/O scheduling
    <braunr> these are really two different things
+ <etenil> ok
+ <braunr> you *can't* implement page clustering outside Mach because Mach
+ implements virtual memory
+ <braunr> both policies and mechanisms
+ <etenil> well, I'd rather think of one thing at a time if that's alright
+ <etenil> so what I'm busy with right now is setting up clustered page-in
+ <etenil> which need to be done within Mach
+ <braunr> keep clustered page-outs in mind too
+ <braunr> although there are more constraints on those
+ <etenil> yes
+ <etenil> I've looked up madvise(). There's a lot of documentation about it
+ in Linux but I couldn't find references to it in Mach (nor Hurd), does it
+ exist?
+ <braunr> well, if it did, you wouldn't be caring about clustered page
+ transfers, would you ?
+ <braunr> be careful about linux specific stuff
+ <etenil> I suppose not
+ <braunr> you should implement at least posix options, and if there are
+ more, consider the bsd variants
+ <braunr> (the Mach VM is the ancestor of all modern BSD VMs)
+ <etenil> madvise() seems to be posix
+ <braunr> there are system specific extensions
+ <braunr> be careful
+ <braunr> CONFORMING TO POSIX.1b. POSIX.1-2001 describes posix_madvise(3)
      with constants POSIX_MADV_NORMAL, etc., with a behavior close to that
+ described here. There is a similar posix_fadvise(2) for file access.
+ <braunr> MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK, MADV_HWPOISON,
      MADV_MERGEABLE, and MADV_UNMERGEABLE are Linux-specific.
+ <etenil> I was about to post these
+ <etenil> ok, so basically madvise() allows tasks etc. to specify a usage
+ type for a chunk of memory, then I could apply the relevant I/O
+ optimization based on this
+ <braunr> that's it
+ <etenil> cool, then I don't need to worry about knowing what the I/O is
+ operating on, I just need to apply the optimizations as advised
+ <etenil> that's convenient
+ <etenil> ok I'll start working on this tonight
+ <etenil> making a basic readahead shouldn't be too hard
+ <braunr> readahead is a misleading name
+ <etenil> is pagein better?
+ <braunr> applies to too many things, doesn't include the case where
+ previous elements could be prefetched
+ <braunr> clustered page transfers is what i would use
+ <braunr> page prefetching maybe
+ <etenil> ok
+ <braunr> you should stick to something that's already used in the
+ literature since you're not inventing something new
+ <etenil> yes I've read a paper about prefetching
+ <etenil> ok
+ <etenil> thanks for your help braunr
+ <braunr> sure
+ <braunr> you're welcome
+ <antrik> braunr: madvise() is really the least important part of the
+ picture...
+ <antrik> very few applications actually use it. but pretty much all
+ applications will profit from clustered paging
+ <antrik> I would consider madvise() an optional goody, not an integral part
+ of the implementation
+ <antrik> etenil: you can find some stuff about KAM's work on
+ http://www.gnu.org/software/hurd/user/kam.html
+ <antrik> not much specific though
+ <etenil> thanks
+ <antrik> I don't remember exactly, but I guess there is also some
+ information on the mailing list. check the archives for last summer
+ <antrik> look for Karim Allah Ahmed
+ <etenil> antrik: I disagree, madvise gives me a good starting point, even
+ if eventually the optimisations should run even without it
+ <antrik> the code he wrote should be available from Google's summer of code
+ page somewhere...
+ <braunr> antrik: right, i was mentioning madvise() because the kernel (VM)
+ interface is pretty similar to the syscall
+ <braunr> but even a default policy would be nice
+ <antrik> etenil: I fear that many bits were discussed only on IRC... so
+ you'd better look through the IRC logs from last April onwards...
+ <etenil> ok
+
+ <etenil> at the beginning I thought I could put that into libstore
+ <etenil> which would have been fine
+
+ <antrik> BTW, I remembered now that KAM's GSoC application should have a
+ pretty good description of the necessary changes... unfortunately, these
+ are not publicly visible IIRC :-(
+
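+The POSIX variant discussed above is enough to express the hint from an
+application's point of view.  A plain usage sketch:
+
+    #include <stdio.h>
+    #include <sys/mman.h>
+
+    int
+    main (void)
+    {
+      size_t len = 64 * 1024 * 1024;
+      void *addr = mmap (NULL, len, PROT_READ | PROT_WRITE,
+                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+
+      if (addr == MAP_FAILED)
+        {
+          perror ("mmap");
+          return 1;
+        }
+
+      /* Hint that the region will be read sequentially, so the kernel
+         may fetch pages ahead of the faulting access.  */
+      int err = posix_madvise (addr, len, POSIX_MADV_SEQUENTIAL);
+      if (err != 0)
+        fprintf (stderr, "posix_madvise: error %d\n", err);
+
+      return 0;
+    }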
+
+## IRC, freenode, #hurd, 2011-02-16
+
+ <etenil> braunr: I've looked in the kernel to see where prefetching would
+ fit best. We talked of the VM yesterday, but I'm not sure about it. It
+ seems to me that the device part of the kernel makes more sense since
+ it's logically what manages devices, am I wrong?
+ <braunr> etenil: you are
+ <braunr> etenil: well
+ <braunr> etenil: drivers should already support clustered sector
+ read/writes
+ <etenil> ah
+ <braunr> but yes, there must be support in the drivers too
+ <braunr> what would really benefit the Hurd mostly concerns page faults, so
+ the right place is the VM subsystem
+
+[[clustered_page_faults]]
+
+
+# 2012-03
+
+
+## IRC, freenode, #hurd, 2012-03-21
+
    <mcsim> I thought that readahead should have some heuristics, like
      accounting for object size and last access time, but I didn't find any
      in kam's patch. Are heuristics needed, or would they be overhead for a
      microkernel?
+ <youpi> size of object and last access time are not necessarily useful to
+ take into account
+ <youpi> what would usually typically be kept is the amount of contiguous
+ data that has been read lately
+ <youpi> to know whether it's random or sequential, and how much is read
+ <youpi> (the whole size of the object does not necessarily give any
+ indication of how much of it will be read)
    <mcsim> if a big object is accessed often, performance could be increased
      if the frame that is read ahead were increased too.
+ <youpi> yes, but the size of the object really does not matter
+ <youpi> you can just observe how much data is read and realize that it's
+ read a lot
+ <youpi> all the more so with userland fs translators
+ <youpi> it's not because you mount a CD image that you need to read it all
    <mcsim> youpi: indeed. this will be better. But on the other hand there
      is the principle about policy and mechanism: the kernel should
      implement mechanism, but heuristics seem to be policy. Or would moving
      the readahead policy to user level be overhead in this case?
+ <antrik> mcsim: paging policy is all in kernel anyways; so it makes perfect
+ sense to put the readahead policy there as well
+ <antrik> (of course it can be argued -- probably rightly -- that all of
+ this should go into userspace instead...)
+ <mcsim> antrik: probably defpager partly could do that. AFAIR, it is
+ possible for defpager to return more memory than was asked.
+ <mcsim> antrik: I want to outline what should be done during gsoc. First,
+ kernel should support simple readahead for specified number of pages
+ (regarding direction of access) + simple heuristic for changing frame
      size. Also the default pager could make some analysis; for instance, if
      it has much data located consecutively it could return more data than
      was asked. For other pagers I won't do anything. Is that suitable?
+ <antrik> mcsim: I think we actually had the same discussion already with
+ KAM ;-)
+ <antrik> for clustered pageout, the kernel *has* to make the decision. I'm
+ really not convinced it makes sense to leave the decision for clustered
+ pagein to the individual pagers
+ <antrik> especially as this will actually complicate matters because a) it
+ will require work in *every* pager, and b) it will probably make handling
+ of MADVISE & friends more complex
+ <antrik> implementing readahead only for the default pager would actually
+ be rather unrewarding. I'm pretty sure it's the one giving the *least*
+ benefit
+ <antrik> it's much, much more important for ext2
+ <youpi> mcsim: maybe try to dig in the irc logs, we discussed about it with
+ neal. the current natural place would be the kernel, because it's the
+ piece that gets the traps and thus knows what happens with each
+ projection, while the backend just provides the pages without knowing
+ which projection wants it. Moving to userland would not only be overhead,
+ but quite difficult
+ <mcsim> antrik: OK, but I'm not sure that I could do it for ext2.
+ <mcsim> OK, I'll dig.
+
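+A sketch of the heuristic youpi describes: remember where the last fault on
+the object ended and ramp the readahead window up while the accesses stay
+sequential.  All names and constants here are illustrative, not existing
+gnumach code:
+
+    #include <mach/vm_param.h>        /* PAGE_SIZE */
+
+    #define RA_MIN_WINDOW (4 * PAGE_SIZE)
+    #define RA_MAX_WINDOW (64 * PAGE_SIZE)
+
+    struct readahead_state
+    {
+      vm_offset_t next_expected;  /* Offset right after the last fault.  */
+      vm_size_t window;           /* Current readahead window, in bytes.  */
+    };
+
+    /* Called on each fault; returns how much to read ahead this time.  */
+    static vm_size_t
+    readahead_update (struct readahead_state *ra, vm_offset_t fault_offset)
+    {
+      if (fault_offset == ra->next_expected)
+        {
+          /* Sequential access: double the window, up to the maximum.  */
+          ra->window = ra->window ? 2 * ra->window : RA_MIN_WINDOW;
+          if (ra->window > RA_MAX_WINDOW)
+            ra->window = RA_MAX_WINDOW;
+        }
+      else
+        /* Random access: no readahead.  */
+        ra->window = 0;
+
+      ra->next_expected = fault_offset + PAGE_SIZE;
+      return ra->window;
+    }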
+
+## IRC, freenode, #hurd, 2012-04-01
+
    <mcsim> as part of implementing the readahead project I have to add an
      interface for setting the appropriate behaviour for a memory range.
      This interface then should be compatible with the madvise call, which
      has a lot of possible advice values, but most of them are specific to
      Linux (according to the man page). Should Mach also support these
      Linux-specific values?
+ <mcsim> p.s. these Linux-specific values shouldn't affect readahead
+ algorithm.
+ <youpi> the interface shouldn't prevent from adding them some day
+ <youpi> so that we don't have to add them yet
    <mcsim> ok. And what should the behaviour for the value MADV_NORMAL look
      like? It seems it should be a synonym for MADV_SEQUENTIAL, shouldn't it?
+ <youpi> no, it just means "no idea what it is"
+ <youpi> in the linux implementation, that means some given readahead value
+ <youpi> while SEQUENTIAL means twice as much
+ <youpi> and RANDOM means zero
+ <mcsim> youpi: thank you.
    <mcsim> youpi: Then, it seems better that the kernel interface for
      setting behaviour accept a readahead value, without hiding it behind
      constants like VM_BEHAVIOR_DEFAULT (as it was in kam's patch). And
      then the implementation of madvise will call vm_behaviour_set with the
      appropriate frame size. Is that right?
+ <youpi> question of taste, better ask on the list
+ <mcsim> ok
+
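+The interface shape being discussed, written out as a C prototype.  The
+advice constants mirror the POSIX madvise values; the names and the
+cluster_size parameter are part of the proposal, not an existing gnumach
+interface (and whether cluster_size belongs here at all is debated further
+below):
+
+    #include <mach.h>
+
+    #define VM_ADVICE_NORMAL     0    /* Default: moderate readahead.  */
+    #define VM_ADVICE_RANDOM     1    /* No readahead.  */
+    #define VM_ADVICE_SEQUENTIAL 2    /* Aggressive readahead.  */
+
+    kern_return_t vm_advise (mach_port_t target_task,
+                             vm_address_t address,
+                             vm_size_t size,
+                             int advice,
+                             vm_size_t cluster_size);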
+
+## IRC, freenode, #hurd, 2012-06-09
+
    <mcsim> hello. What are fictitious pages in gnumach needed for?
    <mcsim> I mean, why can't a real page be grabbed straight away; why is a
      fictitious page sometimes grabbed first and then converted to a real
      one?
+ <braunr> mcsim: iirc, fictitious pages are needed by device pagers which
+ must comply with the vm pager interface
+ <braunr> mcsim: specifically, they must return a vm_page structure, but
+ this vm_page describes device memory
+ <braunr> mcsim: and then, it must not be treated like normal vm_page, which
+ can be added to page queues (e.g. page cache)
+
+
+## IRC, freenode, #hurd, 2012-06-22
+
    <mcsim> braunr: Ah. The patch for large storages introduced a new
      callback, pager_notify_evict. The user had to define this callback on
      his own as pager_dropweak, for instance. But neal's patch changes
      this. Now all callbacks can have any name, but the user defines a
      structure with pager ops and supplies it in pager_create.
    <mcsim> So, I just changed notify_evict to conform to the new style.
    <mcsim> braunr: I want to change the interface of mo_change_attributes
      and test my changes with real partitions. For both of these I have to
      update the ext2fs translator, but both partitions I have are bigger
      than 2 GiB; that's why I need to apply this patch.
    <mcsim> But what to do with mo_change_attributes? I need to somehow
      inform the kernel about the page fault policy.
+ <mcsim> When I change mo_ interface in kernel I have to update all programs
+ that use this interface and ext2fs is one of them.
+
    <mcsim> braunr: How do you think it is better to inform the kernel about
      the fault policy? At the moment I've added a fault_strategy parameter
      that accepts the following strategies: random, sequential with single
      page cluster, sequential with double page cluster and sequential with
      quad page cluster. OSF/mach has a completely different interface for
      mo_change_attributes. In OSF/mach mo_change_attributes accepts a
      structure of parameters. This structure could have different formats
      depending on
    <mcsim> This rpc could be useful because it is not very handy to update
      mo_change_attributes for the kernel, for the hurd libs and for glibc.
      Instead of this the kernel will accept just one more structure format.
+ <braunr> well, like i wrote on the mailing list several weeks ago, i don't
+ think the policy selection is of concern currently
+ <braunr> you should focus on the implementation of page clustering and
+ readahead
+ <braunr> concerning the interface, i don't think it's very important
+ <braunr> also, i really don't like the fact that the policy is per object
+ <braunr> it should be per map entry
    <braunr> i think i mentioned that in my mail too
+ <braunr> i really think you're wasting time on this
+ <braunr> http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00064.html
+ <braunr> http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00029.html
+ <braunr> mcsim: any reason you completely ignored those ?
+ <mcsim> braunr: Ok. I'll do clustering for map entries.
+ <braunr> no it's not about that either :/
+ <braunr> clustering is grouping several pages in the same transfer between
+ kernel and pager
+ <braunr> the *policy* is held in map entries
+ <antrik> mcsim: I'm not sure I properly understand your question about the
+ policy interface... but if I do, it's IMHO usually better to expose
+ individual parameters as RPC arguments explicitly, rather than hiding
+ them in an opaque structure...
+ <antrik> (there was quite some discussion about that with libburn guy)
+ <mcsim> antrik: Following will be ok? kern_return_t vm_advice(map, address,
+ length, advice, cluster_size)
+ <mcsim> Where advice will be either random or sequential
+ <antrik> looks fine to me... but then, I'm not an expert on this stuff :-)
+ <antrik> perhaps "policy" would be clearer than "advice"?
+ <mcsim> madvise has following prototype: int madvise(void *addr, size_t
+ len, int advice);
+ <mcsim> hmm... looks like I made a typo. Or advi_c_e is ok too?
+ <antrik> advise is a verb; advice a noun... there is a reason why both
+ forms show up in the madvise prototype :-)
+ <mcsim> so final variant should be kern_return_t vm_advise(map, address,
+ length, policy, cluster_size)?
+ <antrik> mcsim: nah, you are probably right that its better to keep
+ consistency with madvise, even if the name of the "advice" parameter
+ there might not be ideal...
+ <antrik> BTW, where does cluster_size come from? from the filesystem?
+ <antrik> I see merits both to naming the parameter "policy" (clearer) or
+ "advice" (more consistent) -- you decide :-)
+ <mcsim> antrik: also there is variant strategy, like with inheritance :)
+ I'll choose advice for now.
    <mcsim> What do you mean by "where does cluster_size come from"?
+ <antrik> well, madvise doesn't have this parameter; so the value must come
+ from a different source?
    <mcsim> in the madvise implementation it could be a fixed value or
      somehow calculated based on the size of the memory range. In OSF/mach
      the cluster size is supplied too (via mo_change_attributes).
+ <antrik> ah, so you don't really know either :-)
+ <antrik> well, my guess is that it is derived from the cluster size used by
+ the filesystem in question
+ <antrik> so for us it would always be 4k for now
    <antrik> (and thus you can probably leave it out altogether...)
+ <antrik> well, fatfs can use larger clusters
+ <antrik> I would say, implement it only if it's very easy to do... if it's
+ extra effort, it's probably not worth it
    <mcsim> It makes sense to make the cluster size bigger for ext2 too,
      since most likely consecutive clusters will be within the same group.
+ <mcsim> But anyway I'll handle this later.
+ <antrik> well, I don't know what cluster_size does exactly; but by the
+ sound of it, I'd guess it makes an assumption that it's *always* better
+ to read in this cluster size, even for random access -- which would be
+ simply wrong for 4k filesystem clusters...
    <antrik> BTW, I agree with braunr that madvise() is optional -- it is way
+ way more important to get readahead working as a default policy first
+
+
+## IRC, freenode, #hurd, 2012-07-01
+
+ <mcsim> youpi: Do you think you could review my code?
+ <youpi> sure, just post it to the list
+ <youpi> make sure to break it down into logical pieces
+ <mcsim> youpi: I pushed it my branch at gnumach repository
+ <mcsim> youpi: or it is still better to post changes to list?
+ <youpi> posting to the list would permit feedback from other people too
+ <youpi> mcsim: posix distinguishes normal, sequential and random
+ <youpi> we should probably too
+ <youpi> the system call should probably be named "vm_advise", to be a verb
+ like allocate etc.
    <mcsim> youpi: ok. I had a talk with antrik regarding naming; I'll change
      this later because compiling glibc takes a lot of time.
+ <youpi> mcsim: I find it odd that vm_for_every_page allocates non-existing
+ pages
+ <youpi> there should probably be at least a flag to request it or not
    <mcsim> youpi: the normal policy is a synonym for default. And this could
      be treated as either random or sequential, couldn't it?
+ <braunr> mcsim: normally, no
+ <youpi> yes, the normal policy would be the default
+ <youpi> it doesn't mean random or sequential
+ <youpi> it's just to be a compromise between both
+ <youpi> random is meant to make no read-ahead, since that'd be spurious
+ anyway
+ <youpi> while by default we should make readahead
+ <braunr> and sequential makes even more aggressive readahead, which usually
+ implies a greater number of pages to fetch
+ <braunr> that's all
+ <youpi> yes
+ <youpi> well, that part is handled by the cluster_size parameter actually
+ <braunr> what about reading pages preceding the faulted paged ?
+ <mcsim> Shouldn't sequential clean some pages (if they, for example, are
+ not precious) that are placed before fault page?
+ <braunr> ?
+ <youpi> that could make sense, yes
+ <braunr> you lost me
    <youpi> and something that you wouldn't do with the normal policy
+ <youpi> braunr: clear what has been read previously
+ <braunr> ?
+ <youpi> since the access is supposed to be sequential
+ <braunr> oh
    <youpi> the application will probably not re-read what was already read
+ <braunr> you mean to avoid caching it ?
+ <youpi> yes
+ <braunr> inactive memory is there for that
+ <youpi> while with the normal policy you'd assume that the application
+ might want to go back etc.
+ <youpi> yes, but you can help it
+ <braunr> yes
+ <youpi> instead of making other pages compete with it
+ <braunr> but then, it's for precious pages
    <youpi> I have to say I don't know what a precious page is
    <youpi> does it mean dirty pages?
+ <braunr> no
+ <braunr> precious means cached pages
+ <braunr> "If precious is FALSE, the kernel treats the data as a temporary
+ and may throw it away if it hasn't been changed. If the precious value is
+ TRUE, the kernel treats its copy as a data repository and promises to
+ return it to the manager; the manager may tell the kernel to throw it
+ away instead by flushing and not cleaning the data"
+ <braunr> hm no
+ <braunr> precious means the kernel must keep it
    <mcsim> youpi: Regarding vm_for_every_page: what kind of flag do you
      propose? If the object is internal, I suppose not to cross the bounds
      of the object, setting in_end appropriately in vm_calculate_clusters.
    <mcsim> If the object is external we don't know its actual size, so we
      should make an mo request first. And for this we should create
      fictitious pages.
+ <braunr> mcsim: but how would you implement this "cleaning" with sequential
+ ?
+ <youpi> mcsim: ah, ok, I thought you were allocating memory, but it's just
+ fictitious pages
+ <youpi> comment "Allocate a new page" should be fixed :)
    <mcsim> braunr: I don't know how I will implement this specifically
      (haven't
+ tried yet), but I don't think that this is impossible
+ <youpi> braunr: anyway it's useful as an example where normal and
+ sequential would be different
+ <braunr> if it can be done simply
+ <braunr> because i can see more trouble than gains in there :)
+ <mcsim> braunr: ok :)
+ <braunr> mcsim: hm also, why fictitious pages ?
+ <braunr> fictitious pages should normally be used only when dealing with
+ memory mapped physically which is not real physical memory, e.g. device
+ memory
+ <mcsim> but vm_fault could occur when object represent some device memory.
+ <braunr> that's exactly why there are fictitious pages
    <mcsim> at the moment of allocating a fictitious page it is not known
      what the backing store of the object is.
+ <braunr> really ?
+ <braunr> damn, i've got used to UVM too much :/
+ <mcsim> braunr: I said something wrong?
+ <braunr> no no
+ <braunr> it's just that sometimes, i'm confusing details about the various
+ BSD implementations i've studied
+ <braunr> out-of-gsoc-topic question: besides network drivers, do you think
+ we'll have other drivers that will run in userspace and have to implement
+ memory mapping ? like framebuffers ?
+ <braunr> or will there be a translation layer such as storeio that will
+ handle mapping ?
+ <youpi> framebuffers typically will, yes
+ <youpi> that'd be antrik's work on drm
+ <braunr> hmm
+ <braunr> ok
+ <youpi> mcsim: so does the implementation work, and do you see performance
+ improvement?
+ <mcsim> youpi: I haven't tested it yet with large ext2 :/
    <mcsim> youpi: I'm now going to finish moving ext2 to the new interface,
      then the other translators in the hurd repository, and then finish the
      memory policies in gnumach. Is that ok?
+ <youpi> which new interface?
+ <mcsim> Written by neal. I wrote some temporary code to make ext2 work with
+ it, but I'm going to change this now.
+ <youpi> you mean the old unapplied patch?
+ <mcsim> yes
+ <youpi> did you have a look at Karim's work?
+ <youpi> (I have to say I never found the time to check how it related with
+ neal's patch)
+ <mcsim> I found only his work in kernel. I didn't see his work in applying
+ of neal's patch.
+ <youpi> ok
+ <youpi> how do they relate with each other?
+ <youpi> (I have never actually looked at either of them :/)
+ <mcsim> his work in kernel and neal's patch?
+ <youpi> yes
+ <mcsim> They do not correlate with each other.
+ <youpi> ah, I must be misremembering what each of them do
    <mcsim> kam's patch had changes to support sequential reading in reverse
      order (as in OSF/Mach), but posix does not support such behavior, so I
      didn't implement this either.
+ <youpi> I can't find the pointer to neal's patch, do you have it off-hand?
+ <mcsim> http://comments.gmane.org/gmane.os.hurd.bugs/351
+ <youpi> thx
+ <youpi> I think we are not talking about the same patch from Karim
+ <youpi> I mean lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html
+ <mcsim> I mean this patch:
+ http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00024.html
+ <mcsim> Oh.
+ <youpi> ok
+ <mcsim> seems, this is just the same
+ <youpi> yes
+ <youpi> from a non-expert view, I would have thought these patches play
+ hand in hand, do they really?
+ <mcsim> this patch is completely for kernel and neal's one is completely
+ for libpager.
+ <youpi> i.e. neal's fixes libpager, and karim's fixes the kernel
+ <mcsim> yes
+ <youpi> ending up with fixing the whole path?
+ <youpi> AIUI, karim's patch will be needed so that your increased readahead
+ will end up with clustered page request?
+ <mcsim> I will not use kam's patch
+ <youpi> is it not needed to actually get pages in together?
+ <youpi> how do you tell libpager to fetch pages together?
+ <youpi> about the cluster size, I'd say it shouldn't be specified at
+ vm_advise() level
+ <youpi> in other OSes, it is usually automatically tuned
+ <youpi> by ramping it up to a maximum readahead size (which, however, could
+ be specified)
+ <youpi> that's important for the normal policy, where there are typically
+ successive periods of sequential reads, but you don't know in advance for
+ how long
+ <mcsim> braunr said that there are legal issues with his code, so I cannot
+ use it.
+ <braunr> did i ?
+ <braunr> mcsim: can you give me a link to the code again please ?
+ <youpi> see above :)
+ <braunr> which one ?
+ <youpi> both
+ <youpi> they only differ by a typo
+ <braunr> mcsim: i don't remember saying that, do you have any link ?
+ <braunr> or log ?
+ <mcsim> sorry, can you rephrase "ending up with fixing the whole path"?
    <mcsim> cluster_size in vm_advise could also be considered advice
+ <braunr> no
+ <braunr> it must be the third time we're talking about this
+ <youpi> mcsim: I mean both parts would be needed to actually achieve
+ clustered i/o
+ <braunr> again, why make cluster_size a per object attribute ? :(
+ <youpi> wouldn't some objects benefit from bigger cluster sizes, while
+ others wouldn't?
+ <youpi> but again, I believe it should rather be autotuned
+ <youpi> (for each object)
+ <braunr> if we merely want posix compatibility (and for a first attempt,
+ it's quite enough), vm_advise is good, and the kernel selects the
+ implementation (and thus the cluster sizes)
+ <braunr> if we want finer grained control, perhaps a per pager cluster_size
+ would be good, although its efficiency depends on several parameters
+ <braunr> (e.g. where the page is in this cluster)
    <braunr> but a per object cluster size is a large waste of memory
      considering very few applications (if any) would use the "feature" ..
+ <youpi> there must be a misunderstanding
+ <youpi> why would it be a waste of memory?
+ <braunr> "per object"
+ <youpi> so?
+ <braunr> there can be many memory objects in the kernel
+ <youpi> so?
+ <braunr> so such an overhead must be useful to accept it
+ <youpi> in my understanding, a cluster size per object is just a mere
+ integer for each object
+ <youpi> what overhead?
+ <braunr> yes
+ <youpi> don't we have just thousands of objects?
+ <braunr> for now
+ <braunr> remember we're trying to remove the page cache limit :)
+ <youpi> that still won't be more than tens of thousands of objects
+ <youpi> times an integer
+ <youpi> that's completely neglectible
    <mcsim> braunr: Strange, I can't find it in the logs. Weird things are
      happening in my memory :/ Sorry.
+ <braunr> mcsim: i'm almost sure i never said that :/
+ <braunr> but i don't trust my memory too much either
+ <braunr> youpi: depends
+ <youpi> mcsim: I mean both parts would be needed to actually achieve
+ clustered i/o
    <mcsim> braunr: I made a call vm_advise that applies a policy to a memory
      range (a vm_map_entry, to be specific)
+ <braunr> mcsim: good
+ <youpi> actually the cluster size should even be per memory range
+ <mcsim> youpi: In this sense, yes
+ <youpi> k
+ <mcsim> sorry, Internet connection lags
+ <braunr> when changing a structure used to create many objects, keep in
+ mind one thing
+ <braunr> if its size gets larger than a threshold (currently, powers of
+ two), the cache used by the slab allocator will allocate twice the
+ necessary amount
+ <youpi> sure
+ <braunr> this is the case with most object caching allocators, although
+ some can have specific caches for common sizes such as 96k which aren't
+ powers of two
+ <braunr> anyway, an integer is negligible, but the final structure size
+ must be checked
+ <braunr> (for both 32 and 64 bits)
+ <mcsim> braunr: ok.
+ <mcsim> But I didn't understand what should be done with cluster size in
+ vm_advise? Should I delete it?
+ <braunr> to me, the cluster size is a pager property
+ <youpi> to me, the cluster size is a map property
+ <braunr> whereas vm_advise indicates what applications want
    <youpi> you could have several processes accessing the same file in
      different ways
+ <braunr> youpi: that's why there is a policy
+ <youpi> isn't cluster_size part of the policy?
+ <braunr> but if the pager abilities are limited, it won't change much
+ <braunr> i'm not sure
+ <youpi> cluster_size is the amount of readahead, isn't it?
+ <braunr> no, it's the amount of data in a single transfer
+ <mcsim> Yes, it is.
+ <braunr> ok, i'll have to check your code
+ <youpi> shouldn't transfers permit unbound amounts of data?
    <mcsim> braunr: then I misunderstand what readahead is
+ <braunr> well then cluster size is per policy :)
+ <braunr> e.g. random => 0, normal => 3, sequential => 15
+ <braunr> why make it per map entry ?
    <youpi> because it depends on what the application does
+ <braunr> let me check the code
+ <youpi> if it's accessing randomly, no need for big transfers
+ <youpi> just page transfers will be fine
+ <youpi> if accessing sequentially, rather use whole MiB of transfers
+ <youpi> and these behavior can be for the same file
    <braunr> mcsim: the call is vm_advi_s_e
+ <braunr> not advice
+ <youpi> yes, he agreed earlier
+ <braunr> ok
    <mcsim> cluster_size is the amount of data that I try to read at one time.
    <mcsim> at a single mo_data_request
+ <youpi> which, to me, will depend on the actual map
+ <braunr> ok so it is the transfer size
+ <youpi> and should be autotuned, especially for normal behavior
+ <braunr> youpi: it makes no sense to have both the advice and the actual
+ size per map entry
+ <youpi> to get big readahead with all apps
+ <youpi> braunr: the size is not only dependent on the advice, but also on
+ the application behavior
+ <braunr> youpi: how does this application tell this ?
+ <youpi> even for sequential, you shouldn't necessarily use very big amounts
+ of transfers
+ <braunr> there is no need for the advice if there is a cluster size
+ <youpi> there can be, in the case of sequential, as we said, to clear
+ previous pages
+ <youpi> but otherwise, indeed
+ <youpi> but for me it's the converse
+ <youpi> the cluster size should be tuned anyway
+ <braunr> and i'm against giving the cluster size in the advise call, as we
+ may want to prefetch previous data as well
+ <youpi> I don't see how that collides
+ <braunr> well, if you consider it's the transfer size, it doesn't
+ <youpi> to me cluster size is just the size of a window
+ <braunr> if you consider it's the amount of pages following a faulted page,
+ it will
+ <braunr> also, if your policy says e.g. "3 pages before, 10 after", and
+ your cluster size is 2, what happens ?
+ <braunr> i would find it much simpler to do what other VM variants do:
+ compute the I/O sizes directly from the policy
+ <youpi> don't they autotune, and use the policy as a maximum ?
+ <braunr> depends on the implementations
+ <youpi> ok, but yes I agree
+ <youpi> although casting the size into stone in the policy looks bogus to
+ me
+ <braunr> but making cluster_size part of the kernel interface looks way too
+ messy
+ <braunr> it is
+ <braunr> that's why i would have thought it as part of the pager properties
+ <braunr> the pager is the true component besides the kernel that is
+ actually involved in paging ...
+ <youpi> well, for me the flexibility should still be per application
+ <youpi> by pager you mean the whole pager, not each file, right?
+ <braunr> if a pager can page more because e.g. it's a file system with big
+ block sizes, why not fetch more ?
+ <braunr> yes
+ <braunr> it could be each file
+ <braunr> but only if we have use for it
+ <braunr> and i don't see that currently
+ <youpi> well, posix currently doesn't provide a way to set it
+ <youpi> so it would be useless atm
+ <braunr> i was thinking about our hurd pagers
+ <youpi> could we perhaps say that the policy maximum could be a fraction of
+ available memory?
+ <braunr> why would we want that ?
+ <youpi> (total memory, I mean)
+ <youpi> to make it not completely cast into stone
+ <youpi> as have been in the past in gnumach
+ <braunr> i fail to understand :/
+ <youpi> there must be a misunderstanding then
+ <youpi> (pun not intended)
+ <braunr> why do you want to limit the policy maximum ?
+ <youpi> how to decide it?
+ <braunr> the pager sets it
+ <youpi> actually I don't see how a pager could decide it
+ <youpi> on what ground does it make the decision?
+ <youpi> readahead should ideally be as much as 1MiB
+ <braunr> 02:02 < braunr> if a pager can page more because e.g. it's a file
+ system with big block sizes, why not fetch more ?
+ <braunr> is the example i have in mind
+ <braunr> otherwise some default values
+ <youpi> that's way smaller than 1MiB, isn't it?
+ <braunr> yes
+ <braunr> and 1 MiB seems a lot to me :)
+ <youpi> for readahead, not really
+ <braunr> maybe for sequential
+ <youpi> that's what we care about!
+ <braunr> ah, i thought we cared about normal
+ <youpi> "as much as 1MiB", I said
+ <youpi> I don't mean normal :)
+ <braunr> right
+ <braunr> but again, why limit ?
+ <braunr> we could have 2 or more ?
+ <youpi> at some point you don't get more efficiency
+ <youpi> but eat more memory
+ <braunr> having the pager set the amount allows us to easily adjust it over
+ time
+ <mcsim> braunr: Do you think that readahead should be implemented in
+ libpager?
+ <youpi> than needed
+ <braunr> mcsim: no
+ <braunr> mcsim: err
+ <braunr> mcsim: can't answer
+ <youpi> mcsim: do you read the log of what you have missed during
+ disconnection?
+ <braunr> i'm not sure about what libpager does actually
+ <mcsim> yes
+ <braunr> for me it's just mutualisation of code used by pagers
+ <braunr> i don't know the details
+ <braunr> youpi: yes
+ <braunr> youpi: that's why we want these values not hardcoded in the kernel
+ <braunr> youpi: so that they can be adjusted by our shiny user space OS
+ <youpi> (btw apparently linux uses minimum 16k, maximum 128 or 256k)
+ <braunr> that's more reasonable
+ <youpi> that's just 4 times less :)
+ <mcsim> braunr: You say that pager should decide how much data should be
+ read ahead, but each pager can't implement it on its own as there will
+ be too much overhead. So the only way is to implement this in libpager.
+ <braunr> mcsim: gni ?
+ <braunr> why couldn't they ?
+ <youpi> mcsim: he means the size, not the actual implementation
+ <youpi> the maximum size, actually
+ <braunr> actually, i would imagine it as the pager giving per policy
+ parameters
+ <youpi> right
+ <braunr> like how many before and after
+ <youpi> I agree, then
+ <braunr> the kernel could limit, sure, to avoid letting pagers use
+ completely insane values
+ <youpi> (and that's just a max, the kernel autotunes below that)
+ <braunr> why not
+ <youpi> that kernel limit could be a fraction of memory, then?
+ <braunr> it could, yes
+ <braunr> i see what you mean now
+ <youpi> mcsim: did you understand our discussion?
+ <youpi> don't hesitate to ask for clarification
+ <mcsim> I supposed cluster_size to be such parameter. And advice will help
+ to interpret this parameter (whether data should be read after fault page
+ or some data should be cleaned before)
+ <youpi> mcsim: we however believe that it's rather the pager than the
+ application that would tell that
+ <youpi> at least for the default values
+ <youpi> posix doesn't have a way to specify it, and I don't think it will
+ in the future
+ <braunr> and i don't think our own hurd-specific programs will need more
+ than that
+ <braunr> if they do, we can slightly change the interface to make it a per
+ object property
+ <braunr> i've checked the slab properties, and it seems we can safely add
+ it per object
+ <braunr> cf http://www.sceen.net/~rbraun/slabinfo.out
+ <braunr> so it would still be set by the pager, but depending on the
+ object, the pager could set different values
+ <braunr> youpi: do you think the pager should just provide one maximum size
+ ? or per policy sizes ?
+ <youpi> I'd say per policy size
+ <youpi> so people can increase sequential size like crazy when they know
+ their sequential applications need it, without disturbing the normal
+ behavior
+ <braunr> right
+ <braunr> so the last decision is per pager or per object
+ <braunr> mcsim: i'd say whatever makes your implementation simpler :)
+ <mcsim> braunr: how does the kernel know that objects are created by a
+ specific pager?
+ <braunr> that's the kind of things i'm referring to with "whatever makes
+ your implementation simpler"
+ <braunr> but usually, vm_objects have an ipc port and some properties
+ related to their pagers
+ <braunr> -usually
+ <braunr> the problem i had in mind was the locking protocol but our spin
+ locks are noops, so it will be difficult to detect deadlocks
+ <mcsim> braunr: and for every policy there should be variable in vm_object
+ structure with appropriate cluster_size?
+ <braunr> if you want it per object, yes
+ <braunr> although i really don't think we want it
+ <youpi> better keep it per pager for now
+ <braunr> let's imagine youpi finishes his 64-bits support, and i can
+ successfully remove the page cache limit
+ <braunr> we'd jump from 1.8 GiB at most to potentially dozens of GiB of RAM
+ <braunr> and 1.8, mostly unused
+ <braunr> to dozens almost completely used, almost all the times for the
+ most interesting use cases
+ <braunr> we may have lots and lots of objects to keep around
+ <braunr> so if noone really uses the feature ... there is no point
+ <youpi> but also lots and lots of memory to spend on it :)
+ <youpi> a lot of objects are just one page, but a lot of them are not
+ <braunr> sure
+ <braunr> we wouldn't be doing that otherwise :)
+ <braunr> i'm just saying there is no reason to add the overhead of several
+ integers for each object if they're simply not used at all
+ <braunr> hmm, 64-bits, better page cache, clustered paging I/O :>
+ <braunr> (and readahead included in the last ofc)
+ <braunr> good night !
+ <mcsim> then, probably, make a system-global max-cluster_size? This will save
+ some memory. Also there is usually no sense in reading really huge chunks
+ at once.
+ <youpi> but that'd be tedious to set
+ <youpi> there are only a few pagers, that's no wasted memory
+ <youpi> the user being able to set it for his own pager is however a very
+ nice feature, which can be very useful for databases, image processing,
+ etc.
+ <mcsim> In conclusion I have to implement following: 3 memory policies per
+ object and per vm_map_entry. Max cluster size for every policy should be
+ set per pager.
+ <mcsim> So, there should be 2 system calls for setting memory policy and
+ one for setting cluster sizes.
+ <mcsim> Also amount of data to transfer should be tuned automatically by
+ every page fault.
+ <mcsim> youpi: Correct me, please, if I'm wrong.
+ <youpi> I believe that's what we ended up to decide, yes
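+
+To summarize the above with a sketch: each pager would supply, for every
+advice policy, the maximal number of pages to page in before and after a
+faulted page, and the kernel would autotune below these maxima (and clamp
+them against a global limit). None of these names exist in gnumach; they
+are hypothetical, for illustration only:
+
+    /* Hypothetical per-pager readahead parameters, one entry per policy.  */
+    #define VM_ADVICE_NORMAL     0
+    #define VM_ADVICE_RANDOM     1
+    #define VM_ADVICE_SEQUENTIAL 2
+    #define VM_ADVICE_COUNT      3
+
+    struct pager_advice_params {
+        unsigned short nback;   /* max pages to page in before the fault */
+        unsigned short nforw;   /* max pages to page in after the fault */
+    };
+
+    /* Kept once per pager (not per object), as decided above.  */
+    struct pager_advice_params pager_advice[VM_ADVICE_COUNT];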
+
+
+## IRC, freenode, #hurd, 2012-07-02
+
+ <braunr> is it safe to say that all memory objects implemented by external
+ pagers have "file" semantics ?
+ <braunr> i wonder if the current memory manager interface is suitable for
+ device pagers
+ <mcsim> braunr: What does "file" semantics mean?
+ <braunr> mcsim: anonymous memory doesn't have the same semantics as a file
+ for example
+ <braunr> anonymous memory that is discontiguous in physical memory can be
+ contiguous in swap
+ <braunr> and its location can change with time
+ <braunr> whereas with a memory object, the data exchanged with pagers is
+ identified with its offset
+ <braunr> in (probably) all other systems, this way of specifying data is
+ common to all files, whatever the file system
+ <braunr> linux uses the struct vm_file name, while in BSD/Solaris they are
+ called vnodes (the link between a file system inode and virtual memory)
+ <braunr> my question is : can we implement external device pagers with the
+ current interface, or is this interface really meant for files ?
+ <braunr> also
+ <braunr> mcsim: something about what you said yesterday
+ <braunr> 02:39 < mcsim> In conclusion I have to implement following: 3
+ memory policies per object and per vm_map_entry. Max cluster size for
+ every policy should be set per pager.
+ <braunr> not per object
+ <braunr> one policy per map entry
+ <braunr> transfer parameters (pages before and after the faulted page) per
+ policy, defined by pagers
+ <braunr> 02:39 < mcsim> So, there should be 2 system calls for setting
+ memory policy and one for setting cluster sizes.
+ <braunr> adding one call for vm_advise is good because it mirrors the posix
+ call
+ <braunr> but for the parameters, i'd suggest changing an already existing
+ call
+ <braunr> not sure which one though
+ <mcsim> braunr: do you know how mo_change_attributes is implemented in
+ OSF/Mach?
+ <braunr> after a quick reading of the reference manual, i think i
+ understand why they made it per object
+ <braunr> mcsim: no
+ <braunr> did they change the call to include those paging parameters ?
+ <mcsim> it accepts two parameters: flavor and a pointer to a structure with
+ parameters.
+ <mcsim> flavor determines semantics of structure with parameters.
+ <mcsim>
+ http://www.darwin-development.org/cgi-bin/cvsweb/osfmk/src/mach_kernel/vm/memory_object.c?rev=1.1
+ <mcsim> the structure can have 3 different views, and which exact view is
+ used is determined by the value of flavor
+ <mcsim> So, I thought about implementing similar call that could be used
+ for various purposes.
+ <mcsim> like ioctl
+ <braunr> "pointer to structure with parameters" <= which one ?
+ <braunr> mcsim: don't model anything anywhere like ioctl please
+ <mcsim> memory_object_info_t attributes
+ <braunr> ioctl is the very thing we want NOT to have on the hurd
+ <braunr> ok attributes
+ <braunr> and what are the possible values of flavour, and what kinds of
+ attributes ?
+ <mcsim> and then in each case something like this appears: behave =
+ (old_memory_object_behave_info_t) attributes;
+ <braunr> ok i see
+ <mcsim> flavor could be OLD_MEMORY_OBJECT_BEHAVIOR_INFO,
+ MEMORY_OBJECT_BEHAVIOR_INFO, MEMORY_OBJECT_PERFORMANCE_INFO etc
+ <braunr> i don't really see the point of flavour here, other than
+ compatibility
+ <braunr> having attributes is nice, but you should probably add it as a
+ call parameter, not inside a structure
+ <braunr> as a general rule, we don't like passing structures too much
+ to/from the kernel, because handling them with mig isn't very clean
+ <mcsim> ok
+ <mcsim> What policy parameters should be defined by pager?
+ <braunr> i'd say number of pages to page-in before and after the faulted
+ page
+ <mcsim> Only pages before and after the faulted page?
+ <braunr> for me yes
+ <braunr> youpi might have different things in mind
+ <braunr> the page cleaning in sequential mode is something i wouldn't do
+ <braunr> 1/ applications might want data read sequentially to remain in the
+ cache, for other sequential accesses
+ <braunr> 2/ applications that really don't want to cache anything should
+ use O_DIRECT
+ <braunr> 3/ it's complicated, and we're in july
+ <braunr> i'd rather have a correct and stable result than too many unused
+ features
+ <mcsim> braunr: MADV_SEQUENTIAL Expect page references in sequential order.
+ (Hence, pages in the given range can be aggressively read ahead, and may
+ be freed soon after they are accessed.)
+ <mcsim> this is from linux man
+ <mcsim> braunr: Can I at least design it keeping in mind that it could be
+ implemented later?
+ <mcsim> I mean the future rpc interface
+ <mcsim> braunr: From the kernel's point of view, a pager is just a port.
+ <mcsim> That's why it is not clear to me how I can implement per-pager
+ policy in the kernel
+ <braunr> mcsim: you can't
+ <braunr> 15:19 < braunr> after a quick reading of the reference manual, i
+ think i understand why they made it per object
+ <braunr>
+ http://pubs.opengroup.org/onlinepubs/009695399/functions/posix_madvise.html
+ <braunr> POSIX_MADV_SEQUENTIAL
+ <braunr> Specifies that the application expects to access the specified
+ range sequentially from lower addresses to higher addresses.
+ <braunr> linux might free pages after their access, why not, but this is
+ entirely up to the implementation
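+
+For reference, this is how an application would hand such advice to the
+system through the standard POSIX call (which a gnumach vm_advise would
+ultimately back); the path and length are placeholders:
+
+    /* Map a file and tell the kernel the mapping will be read
+       sequentially, so aggressive readahead pays off.  */
+    #include <fcntl.h>
+    #include <sys/mman.h>
+    #include <unistd.h>
+
+    int process_sequentially (const char *path, size_t len)
+    {
+      int fd = open (path, O_RDONLY);
+      void *p;
+
+      if (fd < 0)
+        return -1;
+      p = mmap (NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
+      if (p == MAP_FAILED)
+        {
+          close (fd);
+          return -1;
+        }
+      posix_madvise (p, len, POSIX_MADV_SEQUENTIAL);
+      /* ... read p[0] .. p[len - 1] in increasing order ... */
+      munmap (p, len);
+      close (fd);
+      return 0;
+    }
+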
+ <mcsim> I know, but when applications want data read sequentially to
+ remain in the cache for other sequential accesses, this kind of access
+ could rather be treated as normal or random
+ <braunr> we can do differently
+ <braunr> mcsim: no
+ <braunr> sequential means the access will be sequential
+ <braunr> so aggressive readahead (e.g. 0 pages before, many after), should
+ be used
+ <braunr> for better performance
+ <braunr> from my pov, it has nothing to do with caching
+ <braunr> i actually sometimes expect data to remain in cache
+ <braunr> e.g. before playing a movie from sshfs, i sometimes prefetch it
+ using dd
+ <braunr> then i use mplayer
+ <braunr> i'd be very disappointed if my data didn't remain in the cache :)
+ <mcsim> At least these pages could be placed into inactive list to be first
+ candidates for pageout.
+ <braunr> that's what will happen by default
+ <braunr> mcsim: if we need more properties for memory objects, we'll adjust
+ the call later, when we actually implement them
+ <mcsim> so, first call is vm_advise and second is changed
+ mo_change_attributes?
+ <braunr> yes
+ <mcsim> there will appear 3 new parameters in mo_c_a: policy, pages before
+ and pages after?
+ <mcsim> braunr: With vm_advise I didn't understand one thing. This call is
+ defined in a defs file, so that should mean that vm_advise is an ordinary
+ rpc call. But at the same time it is defined as a syscall in mach
+ internals (in mach_trap_table).
+ <braunr> mcsim: what ?
+ <braunr> where is it "defined" ? (it doesn't exist in gnumach currently)
+ <mcsim> Ok, let's consider vm_map
+ <mcsim> I define it both in mach_trap_table and in defs file.
+ <mcsim> But why?
+ <braunr> uh ?
+ <braunr> let me see
+ <mcsim> Why is defining it in the defs file not enough?
+ <mcsim> and previous question: there will appear 3 new parameters in
+ mo_c_a: policy, pages before and pages after?
+ <braunr> mcsim: give me the exact file paths please
+ <braunr> mcsim: we'll discuss the new parameters after
+ <mcsim> kern/syscall_sw.c
+ <braunr> right i see
+ <mcsim> here mach_trap_table is defined
+ <braunr> i think they're not used
+ <braunr> they were probably introduced for performance
+ <mcsim> and ./include/mach/mach.defs
+ <braunr> don't bother adding vm_advise as a syscall
+ <braunr> about the parameters, it's a bit more complicated
+ <braunr> you should add 6 parameters
+ <braunr> before and after, for the 3 policies
+ <braunr> but
+ <braunr> as seen in the posix page, there could be more policies ..
+ <braunr> ok forget what i said, it's stupid
+ <braunr> yes, the 3 parameters you had in mind are correct
+ <braunr> don't forget a "don't change" value for the policy though, so the
+ kernel ignores the before/after values if we don't want to change that
+ <mcsim> ok
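+
+As a sketch, the interface changes agreed on above could look as follows.
+These prototypes are hypothetical (in particular the "_ex" variant and the
+VM_ADVICE_KEEP sentinel do not exist); they only illustrate the shape of
+the discussion:
+
+    /* New RPC, mirroring posix_madvise(): record an advice policy for a
+       range of a task's address space, stored per vm_map entry.  */
+    kern_return_t vm_advise (mach_port_t target_task, vm_address_t address,
+                             vm_size_t size, int advice /* VM_ADVICE_* */);
+
+    /* memory_object_change_attributes() extended with three parameters:
+       which policy to change (or a "don't change" sentinel, in which case
+       the before/after values are ignored), and the number of pages to
+       transfer around a faulted page for that policy.  */
+    kern_return_t memory_object_change_attributes_ex
+        (mach_port_t memory_control, boolean_t may_cache,
+         memory_object_copy_strategy_t copy_strategy, mach_port_t reply_to,
+         int advice /* VM_ADVICE_* or VM_ADVICE_KEEP */,
+         vm_size_t pages_before, vm_size_t pages_after);
+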
+ <braunr> mcsim: another reason i asked about "file semantics" is the way we
+ handle the cache
+ <braunr> mcsim: file semantics imply data is cached, whereas anonymous and
+ device memory usually isn't
+ <braunr> (although having the cache at the vm layer instead of the pager
+ layer allows nice things like the swap cache)
+ <mcsim> But this shouldn't affect the possibility of implementing a
+ device pager.
+ <braunr> yes it may
+ <braunr> consider how a fault is actually handled by a device
+ <braunr> mach must use weird fictitious pages for that
+ <braunr> whereas it would be better to simply let the pager handle the
+ fault as it sees fit
+ <mcsim> setting may_cache to false should resolve the issue
+ <braunr> for the caching problem, yes
+ <braunr> which is why i still think it's better to handle the cache at the
+ vm layer, unlike UVM which lets the vnode pager handle its own cache, and
+ removes the vm cache completely
+ <mcsim> The only issue with pager interface I see is implementing of
+ scatter-gather DMA (as current interface does not support non-consecutive
+ access)
+ <braunr> right
+ <braunr> but that's a performance issue
+ <braunr> my problem with device pagers is correctness
+ <braunr> currently, i think the kernel just asks pagers for "data"
+ <braunr> whereas a device pager should really map its device memory where
+ the fault happens
+ <mcsim> braunr: You mean that every access to memory should cause page
+ fault?
+ <mcsim> I mean mapping of device memory
+ <braunr> no
+ <braunr> i mean a fault on device mapped memory should directly access a
+ shared region
+ <braunr> whereas file pagers only implement backing store
+ <braunr> let me explain a bit more
+ <braunr> here is what happens with file mapped memory
+ <braunr> you map it, access it (some I/O is done to get the page content in
+ physical memory), then later it's flushed back
+ <braunr> whereas with device memory, there shouldn't be any I/O, the device
+ memory should directly be mapped (well, some devices need the same
+ caching behaviour, while others provide direct access)
+ <braunr> one of the obvious consequences is that, when you map device
+ memory (e.g. a framebuffer), you expect changes in your mapped memory to
+ be effective right away
+ <braunr> while with file mapped memory, you need to msync() it
+ <braunr> (some framebuffers also need to be synced, which suggests greater
+ control is needed for external pagers)
+ <mcsim> Seems that I understand you. But how is it implemented in other
+ OSes? Do they set something in the mmu?
+ <braunr> mcsim: in netbsd, pagers have a fault operation in addition to get
+ and put
+ <braunr> the device pager sets get and put to null and implements fault
+ only
+ <braunr> the fault callback then calls the d_mmap callback of the specific
+ driver
+ <braunr> which usually results in the mmu being programmed directly
+ <braunr> (e.g. pmap_enter or similar)
+ <braunr> in linux, i think raw device drivers, being implemented as
+ character device files, must provide raw read/write/mmap/etc.. functions
+ <braunr> so it looks pretty much similar
+ <braunr> i'd say our current external pager interface is insufficient for
+ device pagers
+ <braunr> but antrik may know more since he worked on ggi
+ <braunr> antrik: ^
+ <mcsim> braunr: Seems he used io_map
+ <braunr> mcsim: where are you looking at ? the incubator ?
+ <mcsim> his master's thesis
+ <braunr> ah the thesis
+ <braunr> but where ? :)
+ <mcsim> I'll give you a link
+ <mcsim> http://dl.dropbox.com/u/36519904/kgi_on_hurd.pdf
+ <braunr> thanks
+ <mcsim> see p 158
+ <braunr> arg, more than 200 pages, and he says he's lazy :/
+ <braunr> mcsim: btw, have a look at m_o_ready
+ <mcsim> braunr: This is the old form of mo_change_attributes
+ <mcsim> I'm not going to change it
+ <braunr> mcsim: these are actually the default object parameters right ?
+ <braunr> mcsim: if you don't change it, it means the kernel must set
+ default values until the pager changes them, if it does
+ <mcsim> yes.
+ <antrik> mcsim: madvise() on Linux has a separate flag to indicate that
+ pages won't be reused. thus I think it would *not* be a good idea to
+ imply it in SEQUENTIAL
+ <antrik> braunr: yes, my KMS code relies on mapping memory objects for the
+ framebuffer
+ <antrik> (it should be noted though that on "modern" hardware, mapping
+ graphics memory directly usually gives very poor performance, and drivers
+ tend to avoid it...)
+ <antrik> mcsim: BTW, it was most likely me who warned about legal issues
+ with KAM's work. AFAIK he never managed to get the copyright assignment
+ done :-(
+ <antrik> (that's not really mandatory for the gnumach work though... only
+ for the Hurd userspace parts)
+ <antrik> also I'd like to point out again that the cluster_size argument
+ from OSF Mach was probably *not* meant for advice from application
+ programs, but rather was supposed to reflect the cluster size of the
+ filesystem in question. at least that sounds much more plausible to me...
+ <antrik> braunr: I have no idea what you mean by "device pager". device
+ memory is mapped once when the VM mapping is established; there is no
+ need for any fault handling...
+ <antrik> mcsim: to be clear, I think the cluster_size parameter is mostly
+ orthogonal to policy... and probably not very useful at all, as ext2
+ almost always uses page-sized clusters. I'd strongly advise against
+ bothering with it in the initial implementation
+ <antrik> mcsim: to avoid confusion, better use a completely different name
+ for the policy-decided readahead size
+ <mcsim> antrik: ok
+ <antrik> braunr: well, yes, the thesis report turned out HUGE; but the
+ actual work I did on the KGI port is fairly tiny (not more than a few
+ weeks of actual hacking... everything else was just brooding)
+ <antrik> braunr: more importantly, it's pretty much the last (and only
+ non-trivial) work I did on the Hurd :-(
+ <antrik> (also, I don't think I used the word "lazy"... my problem is not
+ laziness per se; but rather inability to motivate myself to do anything
+ not providing near-instant gratification...)
+ <braunr> antrik: right
+ <braunr> antrik: i shouldn't consider myself lazy either
+ <braunr> mcsim: i agree with antrik, as i told you weeks ago
+ <braunr> about
+ <braunr> 21:45 < antrik> mcsim: to be clear, I think the cluster_size
+ parameter is mostly orthogonal to policy... and probably not very useful
+ at all, as ext2 almost always uses page-sized clusters. I'd strongly
+ advise against bothering with it
+ <braunr> in the initial implementation
+ <braunr> antrik: but how do you actually map device memory ?
+ <braunr> also, strangely enough, here is the comment in dragonflys
+ madvise(2)
+ <braunr> MADV_SEQUENTIAL Causes the VM system to depress the priority of
+ pages immediately preceding a given page when it is faulted in.
+ <antrik> braunr: interesting...
+ <antrik> (about SEQUENTIAL on dragonfly)
+ <antrik> as for mapping device memory, I just use device_map() on the
+ mem device to map the physical address space into a memory object, and
+ then through vm_map into the driver (and sometimes application) address
+ space
+ <antrik> formally, there *is* a pager involved of course (implemented
+ in-kernel by the mem device), but it doesn't really do anything
+ interesting
+ <antrik> thinking about it, there *might* actually be page faults involved
+ when the address ranges are first accessed... but even then, the handling
+ is really trivial and not terribly interesting
+ <braunr> antrik: it does the most interesting part, create the physical
+ mapping
+ <braunr> and as trivial as it is, it requires a special interface
+ <braunr> i'll read about device_map again
+ <braunr> but yes, the fact that it's in-kernel is what solves the problem
+ here
+ <braunr> what i'm interested in is to do it outside the kernel :)
+ <antrik> why would you want to do that?
+ <antrik> there is no policy involved in doing an MMIO mapping
+ <antrik> you ask for the physical memory region you are interested in, and
+ that's it
+ <antrik> whether the kernel adds the page table entries immediately or on
+ faults is really an implementation detail
+ <antrik> braunr: ^
+ <braunr> yes it's a detail
+ <braunr> but do we currently have the interface to make such mappings from
+ userspace ?
+ <braunr> and i want to do that because i'd like as many drivers as possible
+ outside the kernel of course
+ <antrik> again, the userspace driver asks the kernel to establish the
+ mapping (through device_map() and then vm_map() on the resulting memory
+ object)
+ <braunr> hm i'm missing something
+ <braunr>
+ http://www.gnu.org/software/hurd/gnumach-doc/Device-Map.html#Device-Map
+ <= this one ?
+ <antrik> yes, this one
+ <braunr> but this implies the device is implemented by the kernel
+ <antrik> the mem device is, yes
+ <antrik> but that's not a driver
+ <braunr> ah
+ <antrik> it's just the interface for doing MMIO
+ <antrik> (well, any physical mapping... but MMIO is probably the only real
+ use case for that)
+ <braunr> ok
+ <braunr> i was thinking about completely removing the device interface from
+ the kernel actually
+ <braunr> but it makes sense to have such devices there
+ <antrik> well, in theory, specific kernel drivers can expose their own
+ device_map() -- but IIRC the only one that does (besides mem of course)
+ is maptime -- which is not a real driver either...
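+
+To make the mechanism antrik describes concrete, a userspace driver could
+map a physical range (e.g. a framebuffer) roughly like this. This is a
+minimal sketch, assuming the MiG-generated device and Mach VM user stubs
+are linked in; error checking is omitted, and the physical address and
+size are placeholders:
+
+    #include <hurd.h>
+    #include <mach.h>
+    #include <device/device.h>
+
+    /* Map SIZE bytes of physical memory starting at PHYS into our own
+       address space, via the kernel "mem" device.  */
+    kern_return_t
+    map_mmio (vm_offset_t phys, vm_size_t size, vm_address_t *addr)
+    {
+      mach_port_t host_priv, device_master;
+      device_t mem;
+      memory_object_t pager;
+
+      get_privileged_ports (&host_priv, &device_master);
+      device_open (device_master, D_READ | D_WRITE, "mem", &mem);
+      /* Wrap the physical range in a memory object...  */
+      device_map (mem, VM_PROT_READ | VM_PROT_WRITE, phys, size, &pager, 0);
+      /* ... and map that object, letting the kernel pick the address.  */
+      *addr = 0;
+      return vm_map (mach_task_self (), addr, size, 0, TRUE, pager, 0,
+                     FALSE, VM_PROT_READ | VM_PROT_WRITE,
+                     VM_PROT_READ | VM_PROT_WRITE, VM_INHERIT_NONE);
+    }
+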
+ <braunr> oh btw, i didn't know you had a blog :)
+ <antrik> well, it would be possible to replace the device interface by
+ specific interfaces for the generic pseudo devices... I'm not sure how
+ useful that would be
+ <braunr> there are lots of interesting stuff there
+ <antrik> hehe... another failure ;-)
+ <braunr> failure ?
+ <antrik> well, when I realized that I'm spending a lot of time pondering
+ things, and never can get myself to actually implement any of them, I had
+ the idea that if I write them down, there might at least be *some* good
+ from it...
+ <antrik> unfortunately it turned out that I need so much effort to write
+ things down, that most of the time I can't get myself to do that either
+ :-(
+ <braunr> i see
+ <braunr> well it's still nice to have it
+ <antrik> (notice that the latest entry is two years old... and I haven't
+ even started describing most of my central ideas :-( )
+ <braunr> antrik: i tried to create a blog once, and found what i wrote so
+ stupid i immediately removed it
+ <antrik> hehe
+ <antrik> actually some of my entries seem silly in retrospect as well...
+ <antrik> but I guess that's just the way it is ;-)
+ <braunr> :)
+ <braunr> i'm almost sure other people would be interested in what i had to
+ say
+ <antrik> BTW, I'm actually not sure whether the Mach interfaces are
+ sufficient to implement GEM/TTM... we would certainly need kernel support
+ for GART (as for any other kind of IOMMU in fact); but beyond that it's not
+ clear to me
+ <braunr> GEM ? TTM ? GART ?
+ <antrik> GEM = Graphics Execution Manager. part of the "new" DRM interface,
+ closely tied with KMS
+ <antrik> TTM = Translation Table Manager. does part of the background work
+ for most of the GEM drivers
+ <braunr> "The Graphics Execution Manager (GEM) is a computer software
+ system developed by Intel to do memory management for device drivers for
+ graphics chipsets." hmm
+ <antrik> (in fact it was originally meant to provide the actual interface;
+ but the Intel folks decided that it's not useful for their UMA graphics)
+ <antrik> GART = Graphics Aperture Remapping Table
+ <antrik> kind of an IOMMU for graphics cards
+ <antrik> allowing the graphics card to work with virtual mappings of main
+ memory
+ <antrik> (i.e. allowing safe DMA)
+ <braunr> ok
+ <braunr> all this graphics stuff looks so complex :/
+ <antrik> it is
+ <antrik> I have a whole big chapter on that in my thesis... and I'm not
+ even sure I got everything right
+ <braunr> what is nvidia using/doing (except for getting the finger) ?
+ <antrik> flushing out all the details for KMS, GEM etc. took the developers
+ like two years (even longer if counting the history of TTM)
+ <antrik> Nvidia's proprietary stuff uses a completely own kernel interface,
+ which is of course not exposed or documented in any way... but I guess
+ it's actually similar in what it does
+ <braunr> ok
+ <antrik> (you could ask the nouveau guys if you are truly
+ interested... they are doing most of their reverse engineering at the
+ kernel interface level)
+ <braunr> it seems graphics have very special needs, and a lot of them
+ <braunr> and the interfaces are changing often
+ <braunr> so it's not that much interesting currently
+ <braunr> it just means we'll probably have to change the mach interface too
+ <braunr> like you said
+ <braunr> so the answer to my question, which was something like "do mach
+ external pagers only implement files ?", is likely yes
+ <antrik> well, KMS/GEM had reached some stability; but now there are
+ further changes ahead with the embedded folks coming in with all their
+ dedicated hardware, calling for unified buffer management across the
+ whole pipeline (from capture to output)
+ <antrik> and yes: graphics hardware tends to be much more complex regarding
+ the interface than any other hardware. that's because it's a combination
+ of actual I/O (like most other devices) with a very powerful coprocessor
+ <antrik> and the coprocessor part is pretty much unique amongst peripheral
+ devices
+ <antrik> (actually, the I/O part is also much more complex than most other
+ hardware... but that alone would only require a more complex driver, not
+ special interfaces)
+ <antrik> embedded hardware makes it more interesting in that the I/O
+ part(s) are separate from the coprocessor ones; and that there are often
+ several separate specialised ones of each... the DRM/KMS stuff is not
+ prepared to deal with this
+ <antrik> v4l over time has evolved to cover such things; but it's not
+ really the right place to implement graphics drivers... which is why
+ there are now efforts to unify these frameworks. funny times...
+
+
+## IRC, freenode, #hurd, 2012-07-03
+
+ <braunr> mcsim: vm_for_every_page should be static
+ <mcsim> braunr: ok
+ <braunr> mcsim: see http://gcc.gnu.org/onlinedocs/gcc/Inline.html
+ <braunr> and it looks big enough that you shouldn't make it inline
+ <braunr> let the compiler decide for you (which is possible only if the
+ function is static)
+ <braunr> (otherwise a global symbol needs to exist)
+ <braunr> mcsim: i don't know where you copied that comment from, but you
+ should review the description of the vm_advise call in mach.defs
+ <mcsim> braunr: I see
+ <mcsim> braunr: It was vm_inherit :)
+ <braunr> mcsim: why isn't NORMAL defined in vm_advise.h ?
+ <braunr> mcsim: i figured actually ;)
+ <mcsim> braunr: I was going to do it later when.
+ <braunr> mcsim: for more info on inline, see
+ http://www.kernel.org/doc/Documentation/CodingStyle
+ <braunr> arg that's an old one
+ <mcsim> braunr: I know that I do not follow coding style
+ <braunr> mcsim: this one is about linux :p
+ <braunr> mcsim: http://lxr.linux.no/linux/Documentation/CodingStyle should
+ have it
+ <braunr> mcsim: "Chapter 15: The inline disease"
+ <mcsim> I was going to fix it later during refactoring when I'll merge
+ mplaneta/gsoc12/working to mplaneta/gsoc12/master
+ <braunr> be sure not to forget :p
+ <braunr> and the best not to forget is to do it asap
+ <braunr> +way
+ <mcsim> As to inline. I thought that even if I specify a function as
+ inline, gcc makes the final decision about it.
+ <mcsim> There was a specifier that made function always inline, AFAIR.
+ <braunr> gcc can force a function not to be inline, yes
+ <braunr> but inline is still considered as a strong hint
+
+
+## IRC, freenode, #hurd, 2012-07-05
+
+ <mcsim1> braunr: hello. You've said that the pager has to supply 2 values
+ to the kernel to give it advice on how to execute a page fault. These
+ two values should be the number of pages before and after the page where
+ the fault occurred. But for the sequential policy the number of pages
+ before makes no sense. For the random policy too. For the normal policy
+ it would be sane to make readahead symmetric. Probably it would be sane
+ to make the pager supply a cluster_size (if it is necessary to supply
+ any) that will be a hint to the kernel of the least sane value? And the
+ maximal value will be f(free_memory, map_entry_size)?
+ <antrik> mcsim1: I doubt symmetric readahead would be a good default
+ policy... while it's hard to estimate an optimum over all typical use
+ cases, I'm pretty sure most situtations will benefit almost exclusively
+ from reading following pages, not preceeding ones
+ <antrik> I'm not even sure it's useful to read preceding pages at all in
+ the default policy -- the use cases are probably so rare that the penalty
+ in all other use cases is not justified. I might be wrong on that
+ though...
+ <antrik> I wonder how other systems handle that
+ <LarstiQ> antrik: if there is a mismatch between pages and the underlying
+ store, like why changing small bits of data on an ssd is slow?
+ <braunr> mcsim1: i don't see why not
+ <braunr> antrik: netbsd reads a few pages before too
+ <braunr> actually, what netbsd does varies with the version, some only mapped
+ in resident pages, later versions started asynchronous transfers in the
+ hope those pages would be there
+ <antrik> LarstiQ: not sure what you are trying to say
+ <braunr> in linux :
+ <braunr> * MADV_NORMAL - the default behavior is to read clusters. This
+ <braunr> * results in some read-ahead and read-behind.
+ <braunr> not sure if it's actually what the implementation does
+ <antrik> well, right -- it's probably always useful to read whole clusters
+ at a time, especially if they are the same size as pages... that doesn't
+ mean it always reads preceding pages; only if the read is in the middle
+ of the cluster AIUI
+ <LarstiQ> antrik: basically what braunr just pasted
+ <antrik> and in most cases, we will want to read some *following* clusters
+ as well, but probably not preceding ones
+ * LarstiQ nods
+ <braunr> antrik: the default policy is usually rather sequential
+ <braunr> here are the numbers for netbsd
+ <braunr> static struct uvm_advice uvmadvice[] = {
+ <braunr>     { MADV_NORMAL, 3, 4 },
+ <braunr>     { MADV_RANDOM, 0, 0 },
+ <braunr>     { MADV_SEQUENTIAL, 8, 7 },
+ <braunr> };
+ <braunr> struct uvm_advice {
+ <braunr> int advice;
+ <braunr> int nback;
+ <braunr> int nforw;
+ <braunr> };
+ <braunr> surprising isn't it ?
+ <braunr> they may suggest sequential may be backwards too
+ <braunr> makes sense
+ <antrik> braunr: what are these numbers? pages?
+ <braunr> yes
+ <antrik> braunr: I suspect the idea behind SEQUENTIAL is that with typical
+ sequential access patterns, you will start at one end of the file, and
+ then go towards the other end -- so the extra clusters in the "wrong"
+ direction do not actually come into play
+ <antrik> only situation where some extra clusters are actually read is when
+ you start in the middle of a file, and thus do not know yet in which
+ direction the sequential read will go...
+ <braunr> yes, there are similar comments in the linux code
+ <braunr> mcsim1: so having before and after numbers seems both
+ straightforward and on par with other implementations
+ <antrik> I'm still surprised about the almost symmetrical policy for NORMAL
+ though
+ <antrik> BTW, is it common to use heuristics for automatically recognizing
+ random and sequential patterns in the absence of explicit madvise?
+ <braunr> i don't know
+ <braunr> netbsd doesn't use any, linux seems to have different behaviours
+ for anonymous and file memory
+ <antrik> when KAM was working on this stuff, someone suggested that...
+ <braunr> there is a file_ra_state struct in linux, for per file read-ahead
+ policy
+ <braunr> now the structure is of course per file system, since they all use
+ the same address
+ <braunr> (which is why i wanted it to be per pager in the first place)
+ <antrik> mcsim1: as I said before, it might be useful for the pager to
+ supply cluster size, if it's different than page size. but right now I
+ don't think this is something worth bothering with...
+ <antrik> I seriously doubt it would be useful for the pager to supply any
+ other kind of policy
+ <antrik> braunr: I don't understand your remark about using the same
+ address...
+ <antrik> braunr: pre-mapping seems the obvious way to implement readahead
+ policy
+ <antrik> err... per-mapping
+ <braunr> the ra_state (read ahead state) isn't the policy
+ <braunr> the policy is per mapping, parts of the implementation of the
+ policy is per file system
+ <mcsim1> braunr: How do you look at the following implementation of the
+ NORMAL policy: We have the fault page that is current. Then we have the
+ maximal size of the readahead block. First we find the first absent
+ pages before and after the current one. Then we try to fit the block
+ that will be read ahead into this range. The following situations could
+ occur: if in the range RBS/2 (RBS -- size of readahead block) there is
+ no page at all, readahead will be symmetric; if the current page is the
+ first absent page, then the whole RBS block will consist of pages that
+ are after the current one; on the contrary, if the current page is the
+ last absent one, then readahead will go backwards.
+ <mcsim1> Additionally, if the current page is approximately in the middle
+ of the range, we can decrease RBS, supposing that access is random.
+ <braunr> mcsim1: i think your gsoc project is about readahead, we're in
+ july, and you need to get the job done
+ <braunr> mcsim1: grab one policy that works, pages before and after are
+ good enough
+ <braunr> use sane default values, let the pagers decide if they want
+ something else
+ <braunr> and concentrate on the real work now
+ <antrik> braunr: I still don't see why pagers should mess with that... only
+ complicates matters IMHO
+ <braunr> antrik: probably, since they almost all use the default
+ implementation
+ <braunr> mcsim1: just use sane values inside the kernel :p
+ <braunr> this simplifies things by only adding the new vm_advise call and
+ not change the existing external pager interface
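+
+As a closing sketch, a fault handler could derive the readahead window
+from such per-policy parameters roughly as follows, in the spirit of
+NetBSD's uvmadvice table quoted above (hypothetical names, reusing the
+pager_advice_params structure from the earlier sketch):
+
+    /* Compute the range [*first, *last] of pages to page in around a
+       faulted page, clamped to the map entry so the window never spills
+       into neighbouring mappings.  All addresses are page-aligned.  */
+    static void
+    compute_readahead (vm_offset_t fault, vm_offset_t entry_start,
+                       vm_offset_t entry_end, /* one past the last page */
+                       const struct pager_advice_params *p,
+                       vm_offset_t *first, vm_offset_t *last)
+    {
+        vm_offset_t back = (vm_offset_t) p->nback * PAGE_SIZE;
+        vm_offset_t forw = (vm_offset_t) p->nforw * PAGE_SIZE;
+
+        *first = (fault - entry_start < back) ? entry_start : fault - back;
+        *last = (entry_end - fault <= forw) ? entry_end - PAGE_SIZE
+                                            : fault + forw;
+    }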
diff --git a/open_issues/performance/ipc_virtual_copy.mdwn b/open_issues/performance/ipc_virtual_copy.mdwn
new file mode 100644
index 00000000..9708ab96
--- /dev/null
+++ b/open_issues/performance/ipc_virtual_copy.mdwn
@@ -0,0 +1,395 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+IRC, freenode, #hurd, 2011-09-02:
+
+ <slpz> what's the usual throughput for I/O operations (like "dd
+ if=/dev/zero of=/dev/null") in one of those Xen based Hurd machines
+ (*bber)?
+ <braunr> good question
+ <braunr> slpz: but don't use /dev/zero and /dev/null, as they don't have
+ anything to do with true I/O operations
+ <slpz> braunr: in fact, I want to test the performance of IPC's virtual
+ copy operations
+ <braunr> ok
+ <slpz> braunr: sorry, the "I/O" was misleading
+ <braunr> use bs=4096 then i guess
+ <slpz> bs > 2k
+ <braunr> ?
+ <slpz> braunr: everything about 2k is copied by vm_map_copyin/copyout
+ <slpz> s/about/above/
+ <slpz> braunr: MiG's stubs check for that value and generate complex (with
+ out_of_line memory) messages if datalen is above 2k, IIRC
+ <braunr> ok
+ <braunr> slpz: found it, thanks
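+
+Underneath, dd essentially runs the following loop; a minimal sketch of
+such a measurement (block sizes above MiG's 2k threshold exercise the
+out-of-line IPC path discussed above):
+
+    /* Measure raw read() throughput from /dev/zero, like the dd runs in
+       this log.  Block sizes above 2k make MiG use out-of-line memory.  */
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <time.h>
+    #include <unistd.h>
+
+    int main (void)
+    {
+      const size_t bs = 256 * 1024;   /* block size */
+      char *buf = malloc (bs);
+      int fd = open ("/dev/zero", O_RDONLY);
+      unsigned long long total = 0;
+      struct timespec t0, t1;
+      double elapsed;
+
+      clock_gettime (CLOCK_MONOTONIC, &t0);
+      do
+        {
+          if (read (fd, buf, bs) != (ssize_t) bs)
+            return EXIT_FAILURE;
+          total += bs;
+          clock_gettime (CLOCK_MONOTONIC, &t1);
+          elapsed = (t1.tv_sec - t0.tv_sec)
+                    + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
+        }
+      while (elapsed < 10.0);
+
+      printf ("%llu bytes in %.1f s: %.1f MiB/s\n", total, elapsed,
+              total / (1024.0 * 1024.0) / elapsed);
+      return 0;
+    }
+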
+ <tschwinge> tschwinge@strauss:~ $ dd if=/dev/zero of=/dev/null bs=4k & p=$!
+ && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 13469
+ <tschwinge> 17091+0 records in
+ <tschwinge> 17090+0 records out
+ <tschwinge> 70000640 bytes (70 MB) copied, 17.1436 s, 4.1 MB/s
+ <tschwinge> Note, however 10 s vs. 17 s!
+ <tschwinge> And this is slow compared to real hardware:
+ <tschwinge> thomas@coulomb:~ $ dd if=/dev/zero of=/dev/null bs=4k & p=$! &&
+ sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 28290
+ <tschwinge> 93611+0 records in
+ <tschwinge> 93610+0 records out
+ <tschwinge> 383426560 bytes (383 MB) copied, 9.99 s, 38.4 MB/s
+ <braunr> tschwinge: is the first result on xen vm ?
+ <tschwinge> I think so.
+ <braunr> :/
+ <slpz> tschwinge: Thanks! Could you please try with a higher block size,
+ something like 128k or 256k?
+ <tschwinge> strauss is on a machine that also hosts a buildd, I think.
+ <braunr> oh ok
+ <pinotree> yes, aside either rossini or mozart
+ <tschwinge> And I can confirm that with dd if=/dev/zero of=/dev/null bs=4k
+ running, a parallel sleep 10 takes about 20 s (on strauss).
+
+[[open_issues/time]]
+
+ <braunr> slpz: i'll set up xen hosts soon and can try those tests while
+ nothing else runs to have more accurate results
+ <tschwinge> tschwinge@strauss:~ $ dd if=/dev/zero of=/dev/null bs=256k &
+ p=$! && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 13482
+ <tschwinge> 4566+0 records in
+ <tschwinge> 4565+0 records out
+ <tschwinge> 1196687360 bytes (1.2 GB) copied, 13.6751 s, 87.5 MB/s
+ <braunr> slpz: gains are logarithmic beyond the page size
+ <tschwinge> thomas@coulomb:~ $ dd if=/dev/zero of=/dev/null bs=256k & p=$!
+ && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 28295
+ <tschwinge> 6335+0 records in
+ <tschwinge> 6334+0 records out
+ <tschwinge> 1660420096 bytes (1.7 GB) copied, 9.99 s, 166 MB/s
+ <tschwinge> This time the sleep 10 decided to take 13.6 s.
+ ``Interesting.''
+ <slpz> tschwinge: Thanks again. The results for the Xen machine are not bad
+ though. I can't obtain a throughput over 50MB/s with KVM.
+ <tschwinge> slpz: Want more data (bs)? Just tell.
+ <braunr> slpz: i easily get more than that
+ <braunr> slpz: what buffer size do you use ?
+ <slpz> tschwinge: no, I just wanted to see if Xen has an upper limit beyond
+ KVM's. Thank you.
+ <slpz> braunr: I try with different sizes until I find the maximum
+ throughput for a certain amount of requests (count)
+ <slpz> braunr: are you working with KVM?
+ <braunr> yes
+ <braunr> slpz: my processor is a model name : Intel(R) Core(TM)2 Duo
+ CPU E7500 @ 2.93GHz
+ <braunr> Linux silvermoon 2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC
+ 2011 x86_64 GNU/Linux
+ <braunr> (standard amd64 squeeze kernel)
+ <slpz> braunr: and KVM's version?
+ <braunr> squeeze (0.12.5)
+ <braunr> bbl
+ <gnu_srs> 212467712 bytes (212 MB) copied, 9.95 s, 21.4 MB/s on kvm for me!
+ <slpz> gnu_srs: which block size?
+ <gnu_srs> 4k, and 61.7 MB/s with 256k
+ <slpz> gnu_srs: could you try with 512k and 1M?
+ <gnu_srs> 512k: 56.0 MB/s, 1024k: 40.2 MB/s Looks like the peak is around a
+ few 100k
+ <slpz> gnu_srs: thanks!
+ <slpz> I've just obtained 1.3GB/s with bs=512k on other (newer) machine
+ <braunr> on which hw/vm ?
+ <slpz> I knew this is a cpu-bound test, but I couldn't imagine faster
+ processors could make this difference
+ <slpz> braunr: Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz
+ <slpz> braunr: KVM
+ <braunr> ok
+ <braunr> how much time did you wait before reading the result ?
+ <slpz> that was 20x better than the same test on my Intel(R)
+ Core(TM)2 Duo CPU T7500 @ 2.20GHz
+ <slpz> braunr: I've repeated the test with a fixed "count"
+ <gnu_srs> My box is: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz: Max
+ is 67 MB/s around 140k block size
+ <braunr> yes but how much time did dd run ?
+ <gnu_srs> 10 s plus/minus a few fractions of a second,
+ <braunr> try waiting 30s
+ <slpz> braunr: didn't check, let me try again
+ <braunr> my kvm peaks at 130 MiB/s with bs 512k / 1M
+ <gnu_srs> 2029690880 bytes (2.0 GB) copied, 30.02 s, 67.6 MB/s, bs=140k
+ <braunr> gnu_srs: i'm very surprised with slpz's result of 1.3 GiB/s
+ <slpz> braunr: over 60 s running, same performance
+ <braunr> nice
+ <braunr> i wonder what makes it so fast
+ <braunr> how much cache ?
+ <gnu_srs> Me too, I cannot get better values than around 67 MB/s
+ <braunr> gnu_srs: same questions
+ <slpz> braunr: 4096KB, same as my laptop
+ <braunr> slpz: l2 ? l3 ?
+ <gnu_srs> kvm: cache=writeback, CPU: 4096 KB
+ <braunr> gnu_srs: this has nothing to do with the qemu option, it's about
+ the cpu
+ <slpz> braunr: no idea, it's the first time I touch this machine. I'm going
+ to see if I find the model in processorfinder
+ <braunr> under my host linux system, i get a similar plot, that is,
+ performance drops beyond bs=1M
+ <gnu_srs> braunr: OK, but I gave you the cache size too, same as slpz.
+ <braunr> i wonder what dd actually does
+ <braunr> read() and writes i guess
+ <slpz> braunr: read/write repeatedly, nothing fancy
+ <braunr> slpz: i don't think it's a good test for virtual copy
+ <braunr> io_read_request, vm_deallocate, io_write_request, right
+ <braunr> slpz: i really wonder what it is about i5 that improves speed so
+ much
+ <slpz> braunr: me too
+ <slpz> braunr: L2: 2x256KB, L3: 4MB
+ <slpz> and something called "SmartCache"
+ <gnu_srs> slpz: where did you find these values?
+ <slpz> gnu_srs: ark.intel.com and wikipedia
+ <gnu_srs> aha, cpuinfo just gives cache size.
+ <slpz> that "SmartCache" thing seems to be just L2 cache sharing between
+ cores. Shouldn't make a difference since we're using only one core, and I
+ don't see KVM hopping between them.
+ <manuel> with bs=256k: 7004487680 bytes (7.0 GB) copied, 10 s, 700 MB/s
+ <manuel> (qemu/kvm, 3 * Intel(R) Xeon(R) E5504 2GHz, cache size 4096 KB)
+ <slpz> manuel: did you try with 512k/1M?
+ <manuel> bs=512k: 7730626560 bytes (7.7 GB) copied, 10 s, 773 MB/s
+ <manuel> bs=1M: 7896825856 bytes (7.9 GB) copied, 10 s, 790 MB/s
+ <slpz> manuel: those are pretty good numbers too
+ <braunr> xeon processor
+ <gnu_srs> lshw gave me: L1 Cache 256KiB, L2 cache 4MiB
+ <slpz> sincerely, I've never seen Hurd running this fast. Just checked
+ "uname -a" to make sure I didn't take the wrong image :-)
+ <manuel> for bs=256k, 60s: 40582250496 bytes (41 GB) copied, 60 s, 676 MB/s
+ <braunr> slpz: i think you can assume processor differences alter raw
+ copies too much to get any valuable results about virtual copy operations
+ <braunr> you need a specialized test program
+ <manuel> and bs=512k, 60s, 753 MB/s
+ <slpz> braunr: I'm using the mach_perf suite from OSFMach to do the
+ "serious" testing. I just wanted a non-synthetic test to confirm the
+ readings.
+
+[[!taglink open_issue_gnumach]] -- have a look at *mach_perf*.
+
+ <braunr> manuel: how much cache ? 2M ?
+ <braunr> slpz: ok
+ <braunr> manuel: hmno, more i guess
+ <manuel> braunr: /proc/cpuinfo says cache size : 4096 KB
+ <braunr> ok
+ <braunr> manuel: performance should drop beyond bs=2M
+ <braunr> but that's not relevant anyway
+ <gnu_srs> Linux: bs=1M, 10.8 GB/s
+ <slpz> I think this difference is too big to be only due to a bigger amount
+ of CPU cycles...
+ <braunr> slpz: clearly
+ <slpz> gnu_srs: your host system has 64 or 32 bits?
+ <slpz> braunr: I'm going to investigate a bit
+ <slpz> but this accidental discovery just made my day. We're able to run
+ Hurd at decent speeds on newer hardware!
+ <braunr> slpz: what result do you get with the same test on your host
+ system ?
+ <manuel> interestingly, running it several times has made the performance
+ drop quite much (i'm getting 400-500MB/s with 1M now, compared to nearly
+ 800 fifteen minutes ago)
+
+[[Degradation]].
+
+ <slpz> braunr: probably an almost infinite throughput, but I don't consider
+ that a valid test, since in Linux, the write operation to "/dev/null"
+ doesn't involve memory copying/moving
+ <braunr> manuel: i observed the same behaviour
+ <gnu_srs> slpz: Host system is 64 bit
+ <braunr> slpz: it doesn't on the hurd either
+ <braunr> slpz: (under 2k, that is)
+ <braunr> over*
+ <slpz> braunr: humm, you're right, as the null translator doesn't "touch"
+ the memory, CoW rules apply
+ <braunr> slpz: the only thing which actually copies things around is dd
+ <braunr> probably by simply calling read()
+ <braunr> which gets its result from a VM copy operation, but copies the
+ content to the caller provided buffer
+ <braunr> then vm_deallocate() the data from the storeio (zero) translator
+ <braunr> if storeio isn't too dumb, it doesn't even touch the transferred
+ buffer (as anonymous vm_map()ped memory is already cleared)
+
+[[!taglink open_issue_documentation]]
+
+ <braunr> so this is a good test for measuring (profiling?) our ipc overhead
+ <braunr> and possibly the vm mapping operations (which could partly explain
+ why the results get worse over time)
+ <braunr> manuel: can you run vminfo | wc -l on your gnumach process ?
+ <slpz> braunr: Yes, unless some special situation apply, like the source
+ address/offset being unaligned, or if the translator decides to return
+ the result in a different buffer (which I assume is not the case for
+ storeio/zero)
+ <manuel> braunr: 35
+ <braunr> slpz: they can't be unaligned, the vm code asserts that
+ <braunr> manuel: ok, this is normal
+ <slpz> braunr: address/offset from read()
+ <braunr> slpz: the caller provided buffer you mean ?
+ <slpz> braunr: yes, and the offset of the memory_object, if it's a pager
+ based translator
+ <braunr> slpz: highly unlikely, the compiler chooses appropriate alignments
+ for such buffers
+ <slpz> braunr: in those cases, memcpy is used over vm_copy
+ <braunr> slpz: and the glibc memcpy() optimized versions can usually deal
+ with that
+ <braunr> slpz: i don't get your point about memory objects
+ <braunr> slpz: requests on memory objects always have aligned values too
+ <slpz> braunr: sure, but can't deal with the user requesting non
+ page-aligned sizes
+ <braunr> slpz: we're considering our dd tests, for which we made sure sizes
+ were page aligned
+ <slpz> braunr: oh, I was talking in a general sense, not just in this dd
+ tests, sorry
+ <slpz> by the way, dd on the host tops at 12 GB/s with bs=2M
+ <braunr> that's consistent with our other results
+ <braunr> slpz: you mean, even on your i5 processor with 1.3 GiB/s on your
+ hurd kvm ?
+ <slpz> braunr: yes, on the GNU/Linux which is running as host
+ <braunr> slpz: well that's not consistent
+ <slpz> braunr: consistent with what?
+ <braunr> slpz: i get roughly the same result on my host, but ten times less
+ on my hurd kvm
+ <braunr> slpz: what's your kernel/kvm versions ?
+ <slpz> 2.6.32-5-amd64 (debian's build) 0.12.5
+ <braunr> same here
+ <braunr> i'm a bit clueless
+ <braunr> why do i only get 130 MiB/s where you get 1.3 .. ? :)
+ <slpz> well, on my laptop, where Hurd on KVM tops at 50 MB/s, Linux gets a
+ bit more than 10 GB/s
+ <braunr> see
+ <braunr> slpz: reduce bs to 256k and test again if you have time please
+ <slpz> braunr: on which system?
+ <braunr> slpz: the fast one
+ <braunr> (linux host)
+ <slpz> braunr: Hurd?
+ <slpz> ok
+ <slpz> 12 GB/s
+ <braunr> i get 13.3
+ <slpz> same for 128k, only at 64k starts dropping
+ <slpz> maybe, on linux we're being limited by memory speed, while on Hurd
+ this test is (much) more CPU-bound?
+ <braunr> slpz: maybe
+ <braunr> too bad processor stalls aren't easy to measure
+ <slpz> braunr: that's very true. It's funny when you read a paper which
+ measures performance by cycles on an old RISC processor. That's almost
+ impossible to do (with reliability) nowadays :-/
+ <slpz> I wonder what throughput Hurd could achieve running bare-metal on
+ this machine...
+ <antrik> both the Xeon and the i5 use cores based on the Nehalem
+ architecture
+ <antrik> apparently Nehalem is where Intel first introduced nested page
+ tables
+ <antrik> which pretty much explains the considerably lower overhead of VM
+ magic
+ <cjuner> antrik, what are nested page tables? (sounds like the 4-level page
+ tables we already have on amd64, or 2-level or 3-level on x86 pae)
+ <antrik> page tables were always 2-level on x86
+ <antrik> that's unrelated
+ <antrik> nested page tables means there is another layer of address
+ translation, so the VMM can do its own translation and doesn't care what
+ the guest system does => no longer has to intercept all page table
+ manipulations
+ <braunr> antrik: do you imply it only applies to virtualized systems ?
+ <antrik> braunr: yes
+ <slpz> antrik: Good guess. Looks like Intel's EPT are doing the trick by
+ allowing the guest OS to deal with its own page faults
+ <slpz> antrik: next monday, I'll try disabling EPT support in KVM on that
+ machine (the fast one). That should confirm your theory empirically.
+ <slpz> this also means that there're too many page faults, as we should be
+ doing virtual copies of memory that is not being accessed
+ <slpz> and looking at how the value of "page faults" in "vmstat" increases,
+ shows that page faults are directly proportional to the number of pages
+ we are asking from the translator
+ <slpz> I've also tried doing a long read() directly, to be sure that "dd"
+ is not doing something weird, and it shows the same behaviour.
+ <braunr> slpz: dd does copy buffers
+ <braunr> slpz: i told you, it's not a good test case for pure virtual copy
+ evaluation
+ <braunr> antrik: do you know if xen benefits from nested page tables ?
+ <antrik> no idea
+
+[[!taglink open_issue_xen]]
+
+ <slpz> braunr: but my small program doesn't, and still provokes a lot of
+ page faults
+ <braunr> slpz: are you certain it doesn't ?
+ <slpz> braunr: looking at google, it looks like recent Xen > 3.4 supports
+ EPT
+ <braunr> ok
+ <braunr> i'm ordering my new server right now, core i5 :)
+ <slpz> braunr: at least not explicitly. I need to look at MiG stubs again,
+ I don't remember if they do something weird.
+ <antrik> braunr: sandybridge or nehalem? :-)
+ <braunr> antrik: no idea
+ <antrik> does it tell a model number?
+ <braunr> not yet
+ <braunr> but i don't have a choice for that, so i'll order it first, check
+ after
+ <antrik> hehe
+ <antrik> I'm not sure it makes all that much difference anyways for a
+ server... unless you are running it at 100% load ;-)
+ <braunr> antrik: i'm planning on running xen guests such as a new buildd
+ <antrik> hm... note though that some of the nehalem-generation i5s were
+ dual-core, while all the new ones are quad
+ <braunr> it's a quad
+ <antrik> the newer generation has better performance per GHz and per
+ Watt... but considering that we are rather I/O-limited in most cases, it
+ probably won't make much difference
+ <antrik> not sure whether there are further virtualisation improvements
+ that could be relevant...
+ <braunr> buildds spend much time running gcc, so even such improvements
+ should help
+ <braunr> there, server ordered :)
+ <braunr> antrik: model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
+
+IRC, freenode, #hurd, 2011-09-06:
+
+ <slpz> youpi: what machines are being used for buildd? Do you know if they
+ have EPT/RVI?
+ <youpi> we use PV Xen there
+ <slpz> I think Xen could also take advantage of those technologies. Not
+ sure if only in HVM or with PV too.
+ <youpi> only in HVM
+ <youpi> in PV it does not make sense: the guest already provides the
+ translated page table
+ <youpi> which is just faster than anything else
+
+IRC, freenode, #hurd, 2011-09-09:
+
+ <antrik> oh BTW, for another data point: dd zero->null gets around 225 MB/s
+ on my lowly 1 GHz Pentium3, with a blocksize of 32k
+ <antrik> (but only half of that with 256k blocksize, and even less with 1M)
+ <antrik> the system has been up for a while... don't know whether it's
+ faster on a freshly booted one
+
+IRC, freenode, #hurd, 2011-09-15:
+
+ <sudoman>
+ http://www.reddit.com/r/gnu/comments/k68mb/how_intelamd_inadvertently_fixed_gnu_hurd/
+ <sudoman> so is the dd command pointed to by that article a measure of io
+ performance?
+ <antrik> sudoman: no, not really
+ <antrik> it's basically the baseline of what is possible -- but the actual
+ slowness we experience is more due to very unoptimal disk access patterns
+ <antrik> though using KVM with writeback caching does actually help with
+ that...
+ <antrik> also note that the title of this post really makes no
+ sense... nested page tables should provide similar improvements for *any*
+ guest system doing VM manipulation -- it's not Hurd-specific at all
+ <sudoman> ok, that makes sense. thanks :)
+
+IRC, freenode, #hurd, 2011-09-16:
+
+ <slpz> antrik: I wrote that article (the one about How AMD/Intel fixed...)
+ <slpz> antrik: It's obviously a bit of an exaggeration, but it's true that
+ nested pages supposes a great improvement in the performance of Hurd
+ running on virtual machines
+ <slpz> antrik: and it's Hurd specific, as this system is more affected by
+ the cost of page faults
+ <slpz> antrik: and as the impact of virtualization on the performance is
+ much higher than on (almost) any other OS.
+ <slpz> antrik: also, dd from /dev/zero to /dev/null is a measure of how
+ fast OOL IPC is.
diff --git a/open_issues/performance/microbenchmarks.mdwn b/open_issues/performance/microbenchmarks.mdwn
new file mode 100644
index 00000000..de3a54b7
--- /dev/null
+++ b/open_issues/performance/microbenchmarks.mdwn
@@ -0,0 +1,13 @@
+[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+Microbenchmarks may give useful hints, or they may not.
+
+<http://www.ibm.com/developerworks/java/library/j-jtp02225.html>
diff --git a/open_issues/performance/microkernel_multi-server.mdwn b/open_issues/performance/microkernel_multi-server.mdwn
new file mode 100644
index 00000000..111d2b88
--- /dev/null
+++ b/open_issues/performance/microkernel_multi-server.mdwn
@@ -0,0 +1,47 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_documentation]]
+
+Performance issues due to the microkernel/multi-server system architecture?
+
+IRC, freenode, #hurd, 2011-07-26
+
+ < CTKArcher> I read that, because of its microkernel+servers design, the
+ hurd was slower than a monolithic kernel, is that confirmed ?
+ < youpi> the hurd is currently slower than current monolithic kernels, but
+ it's not due to the microkernel + servers design
+ < youpi> the microkernel+servers design makes the system call path longer
+ < youpi> but you're bound by disk and network speed
+ < youpi> so the extra overhead will not hurt so much
+ < youpi> except dumb applications that keep doing system calls all the time
+ of course, but they are usually considered bogus
+ < braunr> there may be some patterns (like applications using pipes
+ extensively, e.g. git-svn) which may suffer from the design, but still in
+ an acceptable range
+ < CTKArcher> so, you are saying that disk and network are slowing the
+ system more than the longer system call path, and because of that, it
+ won't really matter ?
+ < youpi> braunr: they should still be fixed because they'll suffer (even if
+ less) on monolithic kernels
+ < youpi> CTKArcher: yes
+ < braunr> yes
+ < CTKArcher> mmh
+ < youpi> CTKArcher: you might want to listen to AST's talk at fosdem 10
+ iirc, about minix
+ < youpi> they even go as far as using an IPC for each low-level in/out
+ < youpi> for security
+ < braunr> this has been expected for a long time
+ < braunr> which is what motivated research in microkernels
+ < CTKArcher> I've already downloaded the video :)
+ < youpi> and it has been more and more true with faster and faster cpus
+ < braunr> but in 95, processors weren't as fast compared to other
+ components as they are now
+ < youpi> while disk/mem haven't evolved so fast