This was a pleasant and well-written paper, but it suffers a bit from being neither-this-nor-that: it offers a presentation of algorithms that are relatively basic, but then does not have enough space left for a really deep investigation of the implementation and its tradeoffs.

For monotonicity, I understand how each point enforces a set of polar angles. However, I don't fully see how these different constraints can be combined efficiently. Essentially it seems you are trying to intersect a bunch of circles, and the resulting intersection shape could be quite complex; it seems like a convex hull kind of problem? And this issue, of intersecting (a representation of) two sets of directions, appears to encompass all the challenge of monotonicity. For example, if I had a constant-time routine for that intersection, then I could use the usual divide-and-conquer technique that we used for orthogonal range searching: build a full binary tree over the points and compute the valid directions for each subtree, since any interval of points is then a union of O(log n) subtrees (sketched below).

You clearly invested a lot of effort in your implementation. Unfortunately, most of this effort did not specifically advance understanding of the tradeoffs between these algorithms. From an implementation perspective, it appears that a substantial portion of your work went into the polyarc intersection code. This is essentially a black box used by some of the algorithms, and it is particular to a single problem, so it doesn't really inform us much about the general approaches (range trees versus rectangles). As a contrasting example, note that all the algorithms tend to rely on some common "primitives" such as Tgreedy. Instead of doing a full implementation and measuring clock time, you could have done an easier implementation in a higher-level language; it would not have offered accurate timing information, but you could have used it to count the number of calls each algorithm makes to the primitives (sketched below). That would give you the same ability to compare the algorithms with much less implementation effort.

Your test data was interesting; good job coming up with some realistic input cases. But you weren't particularly exhaustive in testing with it. You showed how the different algorithms depend on the number of input points, but you also made the interesting observation of something that may matter more: the "degree of rectangleness" of the input, e.g. how long the monotonic sequences are. As with input size, this is something you could have varied to understand the exact dependence (and possibly fit constants to some runtime formulas). This also raises a natural question: is there a way to make the algorithm adapt to the input, so that it is as fast as the best algorithm either way? Could it estimate the sizes of the rectangles in advance and use that estimate to decide which algorithm to run?

You also didn't really unpack where the runtime of the algorithms was going. Is all the work in the evaluation of the primitives, or do the more sophisticated algorithms spend a lot of time on the overhead of maintaining their data structures? You give numerical runtimes at several input sizes; with that data, it ought to be possible to fit a runtime curve that predicts the runtime at arbitrary input sizes, i.e. to compute the constants associated with the asymptotic time bounds (sketched below).

Grade: B+
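
For concreteness, here is a minimal sketch of the divide-and-conquer idea above. It assumes, and this is exactly the hard part I am unsure about, that the valid directions for each point can be represented as a single non-wrapping polar-angle interval and that two such intervals can be intersected in constant time; the names DirectionTree and intersect are hypothetical, not taken from the paper.

```python
import math

EMPTY = (math.inf, -math.inf)  # sentinel for "no valid direction"

def intersect(a, b):
    """Intersect two direction sets given as (lo, hi) polar-angle intervals.
    Assumes neither interval wraps around 2*pi; handling wrap-around (or a
    union of several arcs) is precisely the difficulty noted above."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else EMPTY

class DirectionTree:
    """Binary (segment) tree over the points: each internal node stores the
    intersection of the direction constraints of the points in its subtree,
    so the directions valid for any contiguous run of points are obtained
    by combining O(log n) stored values."""

    def __init__(self, constraints):        # constraints: list of (lo, hi)
        self.n = len(constraints)
        self.tree = [(0.0, 2 * math.pi)] * (2 * self.n)
        self.tree[self.n:] = list(constraints)
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = intersect(self.tree[2 * i], self.tree[2 * i + 1])

    def query(self, l, r):
        """Directions valid for every point in the half-open range [l, r)."""
        res = (0.0, 2 * math.pi)
        l += self.n
        r += self.n
        while l < r:
            if l & 1:
                res = intersect(res, self.tree[l])
                l += 1
            if r & 1:
                r -= 1
                res = intersect(res, self.tree[r])
            l //= 2
            r //= 2
        return res
```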
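
And here is roughly what the call-counting instrumentation could look like in a higher-level language. The decorator and the t_greedy stub are hypothetical stand-ins (t_greedy is meant to suggest the paper's Tgreedy primitive); the point is only that two algorithms can be compared by their tallies of primitive calls rather than by wall-clock time.

```python
from collections import Counter
from functools import wraps

call_counts = Counter()

def counted(primitive):
    """Tally every call to the wrapped primitive under its function name."""
    @wraps(primitive)
    def wrapper(*args, **kwargs):
        call_counts[primitive.__name__] += 1
        return primitive(*args, **kwargs)
    return wrapper

@counted
def t_greedy(points):
    # Hypothetical stand-in for the paper's Tgreedy primitive; the actual
    # logic is irrelevant here, since only the call count is of interest.
    ...

# Run algorithm A and algorithm B on the same inputs, then compare
# call_counts["t_greedy"] (and the tallies of the other primitives).
```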
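
Finally, a sketch of the curve fitting, assuming the measurements are available as (n, seconds) pairs and that the bound has the one-constant form T(n) ≈ c · f(n); the name fit_constant and the default n log n model are illustrative only.

```python
import math

def fit_constant(samples, model=lambda n: n * math.log(n)):
    """Least-squares estimate of c in T(n) ~= c * model(n).

    Minimizing sum_i (t_i - c * model(n_i))**2 over c gives the closed form
    c = sum(model(n_i) * t_i) / sum(model(n_i)**2).
    """
    num = sum(model(n) * t for n, t in samples)
    den = sum(model(n) ** 2 for n, t in samples)
    return num / den

# Usage: c = fit_constant(measurements) with measurements = [(n, seconds), ...];
# c * model(n) then predicts the runtime at input sizes that were never measured.
```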