I am enjoying the insights into the fascinating topic of Private Information Retrieval (PIR) in this 2014 paper (XPIR):

eprint.iacr.org/2014/1025.pdf

My favorite quote so far:

"The (broken) schemes of Aguilar et al. [7] and Trostle and Parrish [8] represent the state-of-the-art in efficient private information retrieval.." 😂

Jokes aside, the early part of this paper explains a very interesting instance of a broader pattern.

Cryptography researchers often come up with stupidly powerful constructs, but sometimes forget what those constructs cost once they leave the paper: dealing with monstrously huge numbers matters, a lot.

In this specific case, to preserve user privacy when accessing a database there is a trivial base case: download the whole database and do the lookup locally. This is trivially information-theoretically private.
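As a toy sketch of that base case (mine, not the paper's; the function name and the fake "download" are just for illustration):

```python
# Trivial, information-theoretically private "PIR": download everything and
# select locally. The server never sees the index, but the communication
# cost is the entire database.

def trivial_pir(database: list[bytes], index: int) -> bytes:
    # 'database' stands in for the full transfer from the server;
    # the selection happens purely on the client side.
    local_copy = list(database)   # simulate "download the whole database"
    return local_copy[index]      # the queried index never leaves the client

db = [b"record-%d" % i for i in range(8)]
print(trivial_pir(db, 3))  # b'record-3'
```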
(2/n)


It turns out that, in practice, a PIR scheme that on paper reduces client-server communication below this base case turns out to be prohibitively expensive in server computation time, negating any value from the sophisticated crypto protocol.
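Back-of-envelope version of that argument, as I understand it (all numbers below are my own illustrative assumptions, not measurements from the paper): classic number-theoretic cPIR needs on the order of one big modular multiplication per database bit on the server, so server throughput, not bandwidth, becomes the bottleneck.

```python
# All constants are illustrative assumptions, not measurements.
DB_BITS = 8 * 10**9        # assume a 1 GB database
LINK_BPS = 100 * 10**6     # assume a 100 Mbit/s client-server link
MULS_PER_SEC = 10 * 10**6  # assumed server rate for big modular multiplications

trivial_seconds = DB_BITS / LINK_BPS   # just ship the whole database
cpir_seconds = DB_BITS / MULS_PER_SEC  # ~1 modular multiplication per bit

print(f"trivial download:       ~{trivial_seconds:.0f} s")  # ~80 s
print(f"naive cPIR server work: ~{cpir_seconds:.0f} s")     # ~800 s
```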

What's perhaps more interesting to me: a key part of this argument seems to be that for the retrieval to be private, the server *must process every record in the database* when answering any particular query.
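To make that concrete for myself (a toy sketch, not real crypto: plain integers stand in for ciphertexts of 0/1 under some additively homomorphic scheme), the server's answer in a single-server scheme is essentially a dot product of the query against *all* records; any data-dependent shortcut would leak which records didn't matter:

```python
def client_query(num_records: int, wanted: int) -> list[int]:
    # "Encryptions" of a 0/1 selection vector; in a real cPIR scheme the
    # server cannot tell which entry encrypts 1.
    return [1 if i == wanted else 0 for i in range(num_records)]

def server_answer(query: list[int], database: list[int]) -> int:
    # The server combines the query with EVERY record. There is no
    # data-dependent early exit: skipping a record would reveal that the
    # query was not for it.
    return sum(q * record for q, record in zip(query, database))

db = [11, 22, 33, 44, 55]
print(server_answer(client_query(len(db), 2), db))  # 33
```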

(3/n)


Tbh I'm not convinced by that statement. I think it was also made in this paper, though I forget which section: web.cs.ucla.edu/~rafail/PUBLIC

These papers are about "cPIR": computational, rather than perfect/information-theoretic, privacy. It turns out that this relaxation of security is required in order to get PIR with *one* server rather than multiple, which is ... somehow obvious(?)
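For contrast, here is a sketch of the classic two-server, information-theoretically private trick (again my own illustration, not from either paper): each server only ever sees a uniformly random subset of indices, so as long as the two don't collude, neither learns anything about which record I wanted. Nothing like this works with a single server, hence the fallback to computational assumptions.

```python
import secrets

def two_server_pir(db: list[int], wanted: int) -> int:
    n = len(db)
    s1 = {i for i in range(n) if secrets.randbits(1)}  # random subset for server 1
    s2 = s1 ^ {wanted}  # same subset with 'wanted' toggled, for server 2

    # Each server XORs together the records it was asked for; individually,
    # each request is just a uniformly random subset of indices.
    answer1 = 0
    for i in s1:
        answer1 ^= db[i]
    answer2 = 0
    for i in s2:
        answer2 ^= db[i]

    # Everything except the wanted record cancels out.
    return answer1 ^ answer2

db = [0x11, 0x22, 0x33, 0x44]
print(hex(two_server_pir(db, 1)))  # 0x22
```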

(4/4)
