I'm enjoying some insights from this 2014 paper (XPIR) on the fascinating topic of Private Information Retrieval (PIR):
My favorite quote so far:
"The (broken) schemes of Aguilar et al. and Trostle and Parrish represent the state-of-the-art in efficient private information retrieval." 😂
Jokes aside, there is a very interesting example of a pattern explained in the early part of this paper.
Cryptography researchers often come up with stupidly powerful constructs, but sometimes forget the cost of what they're playing with on paper: dealing with monstrously huge numbers matters, a lot.
In this specific case, to preserve user privacy when accessing a database there is a trivial "base case": download the whole database. This is trivially information-theoretically secure.
It turns out that, in practice, a PIR system that on paper reduces client-server communication below this base case is prohibitively expensive time-wise for the server itself, negating any value in the sophisticated crypto protocol.
What's perhaps more interesting to me: a key part of this argument seems to be that for the retrieval to be private, the server *must* process *every record in the database* when answering any particular query.
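To make the intuition concrete, here's a toy sketch (not real crypto, and not the XPIR scheme itself) of the usual cPIR shape: the client sends a selection vector, normally an entrywise *encryption* of 0s and 1s, and the server computes a dot product across the entire database. I'm using plaintext bits purely to show the access pattern; the point is that if the server skipped any record, which records it touched would leak information about the query.

```python
def client_query(db_size, index):
    # In a real cPIR scheme each entry would be an encryption of 0 or 1;
    # plaintext bits here are just to illustrate the server's work pattern.
    return [1 if i == index else 0 for i in range(db_size)]

def server_answer(db, query):
    # Multiply-and-accumulate across the WHOLE database: O(n) work
    # regardless of which record the client actually wants.
    acc = 0
    for record, q in zip(db, query):  # touches every record
        acc += q * record             # homomorphic ops in a real scheme
    return acc

db = [42, 7, 99, 13]
q = client_query(len(db), 2)
print(server_answer(db, q))  # recovers db[2]
```

In a real scheme the server can't see which query entry is the 1, so it has no choice but to do the full pass: that's the server-side cost the paper argues can swamp the communication savings.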
Tbh I'm not convinced of that statement. I think it was also made in this paper, though I forget which section: http://web.cs.ucla.edu/~rafail/PUBLIC/34.pdf
These papers are about "cPIR": computational, rather than perfect/information-theoretic, privacy. It turns out that relaxation of security is required to get PIR with *one* server rather than multiple, which is ... somehow obvious(?)
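For contrast, the classic 2-server information-theoretic construction is simple enough to sketch: the client sends a uniformly random subset of indices to one server, and the same subset with the wanted index toggled to the other; each server XORs together the records it was asked for, and the client XORs the two answers. Every record except the wanted one cancels out, and each server alone sees only a uniformly random subset (privacy assumes the two servers don't collude). A minimal sketch:

```python
import secrets

def make_queries(n, i):
    # Server A gets a uniformly random subset of {0..n-1}; server B gets
    # the same subset with membership of i flipped. Each subset on its own
    # is uniformly random, so neither server learns anything about i.
    s_a = {j for j in range(n) if secrets.randbits(1)}
    s_b = s_a ^ {i}  # symmetric difference: toggle index i
    return s_a, s_b

def server_answer(db, subset):
    # Each server just XORs the requested records together.
    acc = 0
    for j in subset:
        acc ^= db[j]
    return acc

def retrieve(db, i):
    s_a, s_b = make_queries(len(db), i)
    # Every j != i appears in both subsets or neither, so it cancels;
    # i appears in exactly one, so the XOR of the answers is db[i].
    return server_answer(db, s_a) ^ server_answer(db, s_b)

db = [0x12, 0x34, 0x56, 0x78]
print(hex(retrieve(db, 1)))  # recovers db[1]
```

Note the same theme recurs even here: both servers still touch (up to) the whole database, but the scheme is information-theoretically private only because there are two of them.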