Talk:Rendezvous hashing

Comparison With Consistent Hashing

In standard practice, Consistent Hashing avoids the skewed-workload problem by placing many replicated points (usually ~100) per server on the circle, so that a failed server's traffic is redistributed evenly across the remaining servers. I do not have a deep understanding of Rendezvous hashing, but if this is the only difference the paragraph describes, the paragraph should either be deleted or at least mention this technique. — Preceding unsigned comment added by 119.70.64.193 (talk) 08:44, 11 December 2013
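For readers unfamiliar with the virtual-point technique mentioned above, here is a minimal sketch of consistent hashing with roughly 100 points per server; the hash function, point count, and names are arbitrary choices for illustration, not taken from any cited source. Each key goes to the server that owns the first virtual point clockwise from the key's hash, so when a server is removed its keys are spread roughly evenly across the survivors.

    import bisect
    import hashlib

    def _hash(s: str) -> int:
        # Stable 64-bit hash derived from MD5; any reasonably uniform hash would do.
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    class ConsistentHashRing:
        def __init__(self, servers, points_per_server=100):
            # Place `points_per_server` virtual points on the circle for each server.
            self._ring = sorted(
                (_hash(f"{srv}#{i}"), srv)
                for srv in servers
                for i in range(points_per_server)
            )
            self._keys = [h for h, _ in self._ring]

        def lookup(self, key: str) -> str:
            # The key is owned by the first virtual point clockwise from its hash.
            idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = ConsistentHashRing(["s1", "s2", "s3"])
    print(ring.lookup("some-object"))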

In Consistent Hashing, it's trivial to control the balance of traffic if the servers' capacities differ. For example, if one server has twice the capacity of another, simply add twice as many points for the first server. Is there a variant of Rendezvous hashing that accomplishes this? — Preceding unsigned comment added by 17.149.233.197 (talk) 18:36, 4 November 2015 (UTC)
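There is a weighted variant of rendezvous hashing that is sometimes described, and it needs no duplicated points at all: each site's hash for the key is mapped to a uniform value u in (0, 1) and the site is scored as -weight / ln(u); the site with the largest score wins, and a site with twice the weight receives roughly twice the keys. The article does not currently cover this, so treat the sketch below (names, hash choice, and details) as an assumption about one possible approach rather than a description of something already in the article.

    import hashlib
    import math

    def _uniform(site: str, key: str) -> float:
        # Map (site, key) to a pseudo-random value strictly inside (0, 1); MD5 is arbitrary.
        h = int.from_bytes(hashlib.md5(f"{site}:{key}".encode()).digest()[:8], "big")
        return (h + 1) / (2**64 + 1)

    def weighted_rendezvous(sites: dict, key: str) -> str:
        # `sites` maps site name -> capacity weight; the highest score wins.
        return max(sites, key=lambda s: -sites[s] / math.log(_uniform(s, key)))

    servers = {"small": 1.0, "big": 2.0}  # "big" should receive about twice the keys
    print(weighted_rendezvous(servers, "some-object"))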

Logarithmic implementation

While it might at first appear that the HRW algorithm runs in O(n) time, this is not the case. The sites can be organized hierarchically, with HRW applied at each level as one descends the hierarchy, leading to O(log n) running time, as in the reference below.[1]

  1. ^ Wang, Wei; Chinya Ravishankar (January 2009). "Hash-Based Virtual Hierarchies for Scalable Location Service in Mobile Ad-hoc Networks". Mobile Networks and Applications. 14 (5): 625–637. doi:10.1007/s11036-008-0144-3.
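For reference in the discussion below, the straightforward O(n) form of HRW is just a maximum over per-site scores; here is a minimal sketch (the hash choice and names are mine, not taken from the cited paper):

    import hashlib

    def _score(site: str, key: str) -> int:
        # Combined hash of (site, key); MD5 stands in for any good hash function.
        return int.from_bytes(hashlib.md5(f"{site}:{key}".encode()).digest(), "big")

    def hrw_lookup(sites, key: str) -> str:
        # Every caller computes the same scores, so all callers agree on the winner
        # without coordination, and removing a site only remaps that site's keys.
        return max(sites, key=lambda s: _score(s, key))

    print(hrw_lookup(["s1", "s2", "s3"], "some-object"))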

I got a copy of the referenced paper, and the relevant section seems to be 3.1.1 on page 628. They have what I'd call a "recursive grid" and they're using HRW to choose the "prime node" for each smaller grid square within the larger grid square. But it really isn't at all clear to me how to relate this to a general HRW implementation. The article seems to imply that there is a way to do it, as it doesn't qualify the statement with something like "in some problem domains."

I put some thought into it, and it seems tricky: any way of defining an object-specific tree (without walking every node, which puts you back at O(n)) leaves you vulnerable to one node's departure rearranging a bunch of other nodes, which seems to defeat the main purpose of HRW (efficient remapping). You can make an object-independent tree and reuse it for different objects, but then this is really Consistent hashing, not HRW at all.

To state my concern a different way: it is not clear how to "organize hierarchically" in less than O(n) time, in a way that doesn't defeat efficient remapping.

Can someone elaborate on how this might work?

AaronWL (talk) 05:15, 14 December 2014 (UTC)

There's also a question on Stack Overflow on the same point.

So far, there's been one proposal about how to do this, but my concern is that it's basically consistent hashing, and it seems like an adjunct to HRW rather than a core part of HRW itself. Aaron Will (talk) 09:57, 14 December 2014 (UTC)


I think there are two different things being conflated:

  • (a) A virtual hierarchical structure (called a skeleton)
  • (b) When there are large numbers of catchers C, is there a way for everyone to agree on the "right" catcher for some file with filename F in less time than O(C)?

Currently the article implies that (b) is a real problem, so people developed (a) in order to solve (b). But if I'm reading Mayank, Phatak, Ravishankar correctly, I get the impression that they think the time it takes a local machine to generate a hash for all C catchers in the entire system is negligible compared to the latency of waiting for (part of) a file from a catcher, and so (b) isn't a real problem.

Is the (local) running time to do rendezvous hashing ever a real problem?

The "skeleton" was developed for entirely different reasons -- my understanding is that it was developed to handle the case where you have something like 100 catchers in each datacenter, around 3 datacenters in each city, and around 100 cities with a datacenter, and you want to give very short latencies for the first part of every file no matter which datacenter a user is closest to, by setting things up so the user can pull the first part of the file from some catcher in the geographically closest datacenter, but you don't want to store the *entire* file in *every* datacenter (or even in every city). (I've been calling each computer used to cache files a "catcher"; is there a better name?).

Where does "O(log C) running time" come from? Do Mayank, Phatak, Ravishankar ever mention (b) or "O(log C)"? Do any other references? If none of our references ever mention (b) or "O(log C)" or "O(C)" or "O(1)", then maybe we should delete the material that talks about (b) and "O(log C)". But because Mayank, Phatak, Ravishankar seem to think the "skeleton" is important, we should summarize what they (and other references) say about it.

Even if it turns out that it is possible to do rendezvous hashing in O(log C) time, I think that should be in a completely separate section of this article, because it is misleading to use Mayank, Phatak, Ravishankar as a reference for stuff I can't find anywhere in their paper. --DavidCary (talk) 01:15, 4 June 2015 (UTC)

Skeleton-based variant

Shouldn't the part that talks about octal numbers say ternary numbers? The digits are 0, 1 and 2. — Preceding unsigned comment added by 159.153.148.40 (talk) 14:15, 28 June 2021 (UTC)
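As an illustration of why the radix matters, here is a rough sketch of the kind of virtual-hierarchy descent the skeleton-based variant appears to describe; the fanout, hash function, and names are my assumptions, not taken from the article or its references. With a fanout of f, each level of the descent appends one base-f digit to the label, so labels built only from the digits 0, 1 and 2 correspond to a fanout of 3, i.e. ternary strings.

    import hashlib

    def _score(label: str, key: str) -> int:
        # Combined hash of (label, key); MD5 stands in for any good hash function.
        return int.from_bytes(hashlib.md5(f"{label}:{key}".encode()).digest(), "big")

    def skeleton_lookup(key: str, depth: int, fanout: int = 3) -> str:
        # Descend a virtual tree of the given fanout. At each level, HRW picks one of
        # the `fanout` child digits; the concatenated digits form a base-`fanout`
        # label (ternary when fanout == 3) identifying a leaf cluster.
        label = ""
        for _ in range(depth):
            best = max(range(fanout), key=lambda d: _score(label + str(d), key))
            label += str(best)
        return label

    # Each level considers only `fanout` candidates, so the descent does
    # O(fanout * depth) hash evaluations rather than O(n).
    print(skeleton_lookup("some-object", depth=4))

In the full variant, as I understand it, the resulting leaf label identifies a small cluster of real sites and HRW is applied once more among the reachable sites in that cluster, so only a failed site's own keys get remapped; the sketch above only shows the descent.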