Proof of hypothetical syllogism by constructive dilemma
Lemmas used:
- Logical equivalences involving conditional statements A and B
- Identity laws A
- Negation laws A
- Constructive dilemma

(The proposition column of the proof table did not survive; the remaining skeleton lists each step, the earlier steps it uses, and its justification.)

Step   Uses        Justification
.1                 premise
.11    .1          Logical equivalences involving conditional statements B
.12    .11
.13    .1, .12
.14    .13         Identity laws A
.15    .14
.16    .15
.17    .13, .16
.18    .17
.19    .18         Negation laws A
.2     .19
.21    .18, .2
.22    .21         Constructive dilemma
.23    .22
.24    .23
.25    .24         Logical equivalences involving conditional statements A
.26    .25
.27    .24, .26
.28    .2, .27
.29    .28
.3     .16, .29
.31    .12, .3     conclusion
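Since the propositions themselves were lost, here is a hedged reconstruction of the overall strategy (with placeholder variables P, Q, R; the step count does not match the original table). The goal is hypothetical syllogism, (P→Q)∧(Q→R) ⊢ P→R, derived via constructive dilemma:

```latex
% Sketch only: the original proposition column is unrecoverable,
% so this derivation just illustrates how the cited lemmas combine.
\begin{align*}
1.\;& (P \to Q) \land (Q \to R) && \text{premise} \\
2.\;& P \to Q                   && \text{from 1} \\
3.\;& Q \to R                   && \text{from 1} \\
4.\;& \lnot P \lor Q            && \text{2, material implication} \\
5.\;& \lnot P \to \lnot P       && \text{identity} \\
6.\;& \lnot P \lor R            && \text{4, 5, 3, constructive dilemma} \\
7.\;& P \to R                   && \text{6, material implication}
\end{align*}
```

Constructive dilemma is applied in step 6 in its general form: from A∨B, A→C, and B→D, infer C∨D (here A = C = ¬P, B = Q, D = R).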
Getting rid of the daemon & speeding up D-Bus without hacking the kernel?
Someone pointed out in an e-mail on the D-Bus mailing list that D-Bus is very slow. According to "Speeding up D-Bus" [LWN.net], the performance issue again has to do with the "daemon" or "server process", much like the issue the X Window System faces. The problem, as I understand it, is that the D-Bus daemon and the X server are so heavily involved that frequent IPC between server and clients slows down the whole subsystem. Offloading the server's job to the clients should improve the situation somewhat.
Diagram 1 shows a D-Bus flow in which Process A sends a message to Process B. (I know very little about D-Bus; all the diagrams here are only meant to illustrate the idea.) When the message sent by Process A arrives at Process B, Process B calls its message handler to handle it. Because each message must pass through the D-Bus daemon, a message sent by one client undergoes at least two context switches before it is propagated to the client on the other side.
The message handler in Diagram 1 may look like this:
void messageHandler(DBusMessage &dBusMessage)
{
    if (dBusMessage == FOO_MESSAGE) {
        /* ... do something for FOO_MESSAGE ... */
    } else if (dBusMessage == BAR_MESSAGE) {
        /* ... do something for BAR_MESSAGE ... */
    } else if (dBusMessage == ...) {
        /* ... */
    }
}
The message handler is executed in the context of Process B. Normally, the handler needs to access variables in Process B's address space in order to do something useful.
The flow in Diagram 1 shows the high-latency problem (caused by the context switches) that D-Bus currently has. So I came up with a simple idea to change the design, like this:
Shared memory is the foundation of this design. The message handler code and all the variables it accesses are shared by the message sender process (Process A) and the message receiver process (Process B), as shown in Diagram 2, where the address spaces of Process A and Process B overlap on the message handler part. The D-Bus daemon is no longer a message relay or router. Instead, the daemon acts more like a telephone operator whose job is to bind the receiver's message handler to the sender, so that when the sender wants to send a message to the receiver, it actually calls the receiver's message handler directly.

As in the original design, the message is passed as an argument to the handler, messageHandler(DBusMessage & dBusMessage), but the handler is now called by the message sender rather than by the message receiver, which means it executes in the sender's context instead of the receiver's. The outcome is that the two context switches of the original design are completely eliminated (the latency is gone) and no kernel hacking is required (unlike the solution described in the "Speeding up D-Bus" [LWN.net] article above), although synchronization would be needed because the sender and receiver may share some variables.
So I would like to know what you think about the new design. Is it feasible? Is it really beneficial? Or is my description unclear? --
How to make synchronization of the receiver's variables transparent to the programmer is the challenge. The choices may be:
page faults or SIGSEGV (enforced by mprotect()) to protect shared variables transparently
ptrace to set data breakpoints, or ptrace to instrument the client process's code
call mmap() with MAP_SHARED when the client process calls library functions to share out its message handler (alternatively: use ptrace to force the client process to call mmap() with MAP_SHARED)
change the D-Bus interface so that programmers must use synchronizing APIs to access shared variables
suggest people stop using D-Bus (inadequate, unfriendly interface and undemocratic policy) and switch to another existing IPC mechanism comparable to D-Bus yet adequate (one that can do as well as my suggested new design)
so many people need speed, but D-Bus can't satisfy them and there is no existing candidate to use. So we should create one ourselves!
link (bind) the message sender to the message receiver's event handler dynamically, so the sender calls the receiver's handler directly instead of sending a message to the receiver
multicast
maintain shared memory maps? or keep mmap() file descriptors open? Is a shared memory mapping still in effect after its fd is closed?
Is it possible to transfer the handler code (and the variables it accesses) from Process B to Process A so that Process A can call the handler directly?
(It's undeniable, isn't it, that many applications depend on such a client-server design, e.g. httpd, sshd, ...)
What's the difference between KDBus and Alberto Mardegan's approach [1][2]?
Wouldn't Alberto Mardegan's approach result in N^2 connections?
The D-Bus specification says it is low-latency. Is that misleading?
Don't be afraid to rewrite existing code; otherwise we can only achieve suboptimal results. I would rather create my own IPC than keep using D-Bus.