phyzome 2 days ago

I found less-than-great results in a simulation where there's a slight persistent difference between two of the options: https://www.brainonfire.net/blog/2019/07/21/load-balancing-b... (as part of a larger study on healthchecks that Don't Suck).
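
Rough shape of that setup, for anyone who wants to poke at it without reading the whole post (hypothetical Python sketch, not the article's actual simulation code; the server count, rates, and the size of the "slight persistent difference" are all made up):

  # Toy power-of-two-choices loop with one persistently slower backend.
  # Everything here is illustrative; it is not the simulation from the post.
  import random

  N_SERVERS = 10
  TICKS = 10_000
  ARRIVALS_PER_TICK = 50
  BASE_CAPACITY = 6      # requests a normal server completes per tick
  SLOW_CAPACITY = 5      # server 0 is persistently a little slower

  outstanding = [0] * N_SERVERS   # in-flight requests per server (the load metric)
  routed = [0] * N_SERVERS        # total requests each server has received

  for _ in range(TICKS):
      for _ in range(ARRIVALS_PER_TICK):
          a, b = random.sample(range(N_SERVERS), 2)
          pick = a if outstanding[a] <= outstanding[b] else b
          outstanding[pick] += 1
          routed[pick] += 1
      for s in range(N_SERVERS):
          cap = SLOW_CAPACITY if s == 0 else BASE_CAPACITY
          outstanding[s] = max(0, outstanding[s] - cap)

  total = sum(routed)
  print("traffic share per server:", [round(r / total, 3) for r in routed])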

  • yuliyp 2 days ago

    I think the simulation's claim that pick-2 can send 2.5x as much traffic to the most loaded server as to the least loaded is a bit misleading: that can happen when the load metric is essentially random, and the more correlated the metric is with actual load, the better it gets. Also, rather than the ratio of most loaded to least loaded, it might be better to look at the ratio of most loaded to average: that is, how much extra work we sent to one poor server. By that measure, pick-2 has an absolute cap of 2x the average load on any single server.
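
    To illustrate the cap: even in the worst case, where one server's load metric is broken and it wins every pairing it shows up in, it only appears in roughly 2/N of the random pairs, so its share tops out around 2x the average. Rough sketch (hypothetical Python, numbers arbitrary):

      # Worst case for pick-2: server 0 always reports zero load, so it
      # wins every pair it is sampled into. Its traffic share is still
      # capped near 2x the average. All numbers are illustrative.
      import random

      N_SERVERS = 20
      REQUESTS = 500_000
      BROKEN = 0   # server whose reported load is stuck at zero

      received = [0] * N_SERVERS

      def reported_load(s):
          return 0 if s == BROKEN else received[s]

      for _ in range(REQUESTS):
          a, b = random.sample(range(N_SERVERS), 2)
          pick = a if reported_load(a) <= reported_load(b) else b
          received[pick] += 1

      avg = sum(received) / N_SERVERS
      print("max/avg:", round(max(received) / avg, 2))            # close to 2.0
      print("max/min:", round(max(received) / min(received), 2))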

  • miggy 2 days ago

    Excellent read. It highlights key aspects like health checks, server restarts, warm-up, and load shedding, all of which make load balancing, already a hard problem, even harder.

reilly3000 2 days ago

Incidentally, this makes me think about how little I've had to think about load balancing in a long time. It's one of those cloud primitives that makes sense as a default for most use cases and just works.

atorodius 2 days ago

Neat trick indeed. Would be cool to do the math and get an analytical formula for the mean queue time as a function of the cache-refresh interval, for a given k, under some mild assumptions.
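
The fresh-information baseline (no staleness at all) is, if I remember right, already in Mitzenmacher's supermarket-model analysis: with Poisson arrivals at rate λ per server, unit-rate exponential service, and d fresh choices (the k here, I assume), the mean time in system tends to the sum over i ≥ 1 of λ^((d^i − d)/(d − 1)) as the number of servers grows. The refresh-interval term is the missing piece. Quick numeric check of that baseline (Python; formula from memory, so treat it as a sketch):

  # Fluid-limit mean time in system for the d-choices "supermarket model"
  # with perfectly fresh load information (no cache/refresh lag), as I
  # recall it: T_d(lam) = sum_{i>=1} lam^((d^i - d)/(d - 1)).
  # For d = 1 it collapses to the M/M/1 sojourn time 1/(1 - lam).
  # This is only the no-staleness baseline, not the refresh-aware answer.

  def mean_time_in_system(lam: float, d: int, terms: int = 40) -> float:
      if d == 1:
          return 1.0 / (1.0 - lam)
      return sum(lam ** ((d ** i - d) / (d - 1)) for i in range(1, terms + 1))

  for lam in (0.5, 0.9, 0.99):
      print(lam, round(mean_time_in_system(lam, 1), 2),
            round(mean_time_in_system(lam, 2), 2))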

  • akoboldfrying 2 days ago

    That may have been done in the underlying paper by Mitzenmacher et al., but I haven't checked.

    I'm more confident that that paper established that firing n requests at n servers results in a max server load proportional to log(log(n)) with high probability, vs. proportional to log(n)/log(log(n)) for purely random assignment -- IOW an exponential improvement in the max server load over random.
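
    That one is easy to see empirically, too. Quick balls-into-bins check (Python; n and the constants are arbitrary, and the closed forms in the comments are just the asymptotics from memory):

      # n requests onto n servers: compare the max load under uniform-random
      # placement vs power-of-two-choices. Asymptotically the max load is
      # ~ln(n)/ln(ln(n)) for random and ~ln(ln(n))/ln(2) for two choices.
      import math
      import random

      def max_load(n, choices):
          load = [0] * n
          for _ in range(n):
              candidates = random.sample(range(n), choices)
              pick = min(candidates, key=lambda s: load[s])
              load[pick] += 1
          return max(load)

      n = 100_000
      print("random     max load:", max_load(n, 1),
            "(theory ~", round(math.log(n) / math.log(math.log(n)), 1), ")")
      print("two-choice max load:", max_load(n, 2),
            "(theory ~", round(math.log(math.log(n)) / math.log(2), 1), ")")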