Guruswami, Venkatesan, Christopher Umans, and Salil Vadhan. “Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes.” Journal of the ACM 56, no. 4 (2009): 1–34.
Version History: Preliminary versions of this article appeared as Technical Report TR06-134 in Electronic Colloquium on Computational Complexity, 2006, and in Proceedings of the 22nd Annual IEEE Conference on Computational Complexity (CCC ’07), pp. 96–108. The preliminary version received the Best Paper Award at CCC ’07.
We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma et al. [2007] required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy [2005].
Our expanders can be interpreted as near-optimal “randomness condensers,” that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new, self-contained construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. [2003] and improving upon it when the error parameter is small (e.g., 1/poly(n)).
Haitner, Iftach, Minh Nguyen, Shien Jin Ong, Omer Reingold, and Salil Vadhan. “Statistically hiding commitments and statistical zero-knowledge arguments from any one-way function.” SIAM Journal on Computing 39, no. 3 (2009): 1153–1218.
Version History: Special Issue on STOC ’07. Merge of papers from FOCS ’06 and STOC ’07. Received the SIAM Outstanding Paper Prize in 2011.
We give a construction of statistically hiding commitment schemes (those in which the hiding property holds against even computationally unbounded adversaries) under the minimal complexity assumption that one-way functions exist. Consequently, one-way functions suffice to give statistical zero-knowledge arguments for any NP statement (whereby even a computationally unbounded adversarial verifier learns nothing other than the fact that the assertion being proven is true, and no polynomial-time adversarial prover can convince the verifier of a false statement). These results resolve an open question posed by Naor et al. [J. Cryptology, 11 (1998), pp. 87–108].
Ong, Shien Jin, David Parkes, Alon Rosen, and Salil Vadhan. “Fairness with an honest minority and a rational majority.” In O. Reingold, editor, Proceedings of the Fourth Theory of Cryptography Conference (TCC ’09), Lecture Notes in Computer Science, 5444:36–53. Springer-Verlag, 2009.
Version History: Preliminary version posted as Cryptology ePrint Archive Report 2008/097, March 2008.
We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by a set-Nash analogue of trembling-hand perfect equilibrium). The protocol requires only a standard (synchronous) broadcast channel, tolerates both early stopping and incorrectly computed messages, and uses only 2 rounds of communication.
Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria. They all also required a non-constant number of rounds of communication.
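The protocol operates over an arbitrary threshold secret sharing scheme. For readers unfamiliar with the underlying primitive, here is a minimal sketch of the standard instantiation, Shamir's scheme over a prime field; this is not the paper's reconstruction protocol, and the modulus and function names are illustrative choices.

```python
import random

P = 2**61 - 1  # illustrative prime field modulus

def share(secret: int, t: int, n: int, rng: random.Random) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them suffice to reconstruct."""
    # Random polynomial of degree t-1 with constant term equal to the secret.
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the shares at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any \(t\) of the \(n\) shares recover the secret, while fewer than \(t\) reveal nothing about it; this hiding property of the underlying scheme is what a reconstruction protocol must preserve against rational deviations.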
Dodis, Yevgeniy, Salil Vadhan, and Daniel Wichs. “Proofs of retrievability via hardness amplification.” In O. Reingold, editor, Proceedings of the Fourth Theory of Cryptography Conference (TCC ’09), Lecture Notes in Computer Science, 5444:109–127. Springer-Verlag, 2009.
Version History: Originally presented at the Theory of Cryptography Conference (TCC) 2009. Full version published in the Cryptology ePrint Archive.
Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client’s data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we

Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JK07], without making any simplifying assumptions on the behavior of the adversary.

Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [SW08].

Build the first bounded-use scheme with information-theoretic security.
The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of PoR codes, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
Dwork, Cynthia, Moni Naor, Omer Reingold, Guy Rothblum, and Salil Vadhan. “On the complexity of differentially private data release: Efficient algorithms and hardness results.” In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC ’09), 381–390. ACM, 2009.
We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a "synthetic data set" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released, the original data and the curator play no further role.

For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.
Haitner, Iftach, Omer Reingold, Salil Vadhan, and Hoeteck Wee. “Inaccessible entropy.” In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC ’09), 611–620. ACM, 2009.
We put forth a new computational notion of entropy, which measures the (in)feasibility of sampling high-entropy strings that are consistent with a given protocol. Specifically, we say that the \(i\)’th round of a protocol \((\mathsf{A,B})\) has accessible entropy at most \(k\) if no polynomial-time strategy \(\mathsf{A}^*\) can generate messages for \(\mathsf{A}\) such that the entropy of its message in the \(i\)’th round, conditioned both on prior messages of the protocol and on prior coin tosses of \(\mathsf{A}^*\), is greater than \(k\). We say that the protocol has inaccessible entropy if the total accessible entropy (summed over the rounds) is noticeably smaller than the real entropy of \(\mathsf{A}\)’s messages, conditioned only on prior messages (but not the coin tosses of \(\mathsf{A}\)). As applications of this notion, we

Give a much simpler and more efficient construction of statistically hiding commitment schemes from arbitrary one-way functions.

Prove that constant-round statistically hiding commitments are necessary for constructing constant-round zero-knowledge proof systems for NP that remain secure under parallel composition (assuming the existence of one-way functions).
Trevisan, Luca, Madhur Tulsiani, and Salil Vadhan. “Regularity, boosting, and efficiently simulating every high-entropy distribution.” In Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC ’09), 126–136. IEEE, 2009.
Version History: Preliminary version posted as ECCC TR08-103.
We show that every bounded function \({g} : \{0,1\}^n \to [0,1]\) admits an efficiently computable "simulator" function \({h} : \{0,1\}^n \to [0,1]\) such that every fixed polynomial-size circuit has approximately the same correlation with \({g}\) as with \({h}\). If \({g}\) describes (up to scaling) a high min-entropy distribution \(D\), then \({h}\) can be used to efficiently sample a distribution \(D'\) of the same min-entropy that is indistinguishable from \(D\) by circuits of fixed polynomial size. We state and prove our result in a more abstract setting, in which we allow arbitrary finite domains instead of \(\{0,1\}^n\), and arbitrary families of distinguishers instead of fixed polynomial-size circuits. Our result implies (a) the weak Szemerédi regularity lemma of Frieze and Kannan, (b) a constructive version of the dense model theorem of Green, Tao, and Ziegler with better quantitative parameters (polynomial rather than exponential in the distinguishing probability), and (c) the Impagliazzo hardcore set lemma. It appears to be the general result underlying the known connections between "regularity" results in graph theory, "decomposition" results in additive combinatorics, and the hardcore lemma in complexity theory.

We present two proofs of our result, one in the spirit of Nisan's proof of the hardcore lemma via duality of linear programming, and one similar to Impagliazzo's "boosting" proof. A third proof, by iterative partitioning, which gives the complexity of the sampler to be exponential in the distinguishing probability, is also implicit in the Green–Tao–Ziegler proofs of the dense model theorem.
Mironov, Ilya, Omkant Pandey, Omer Reingold, and Salil Vadhan. “Computational differential privacy.” In S. Halevi, editor, Advances in Cryptology—CRYPTO ’09, Lecture Notes in Computer Science, 5677:126–142. Springer-Verlag, 2009.
The definition of differential privacy has recently emerged as a leading standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficient—i.e., computationally bounded—adversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by Reingold et al. [1]. We extend the dense model theorem of Reingold et al. to demonstrate equivalence between two definitions (indistinguishability-based and simulatability-based) of computational differential privacy.
Our computational analogues of differential privacy seem to allow for more accurate constructions than the standard information-theoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentially private protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private.
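For concreteness, the standard information-theoretic notion being relaxed can be illustrated by the classical Laplace mechanism, a textbook construction that predates this paper; the function and parameter names below are illustrative.

```python
import math
import random

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Release true_answer + Laplace(sensitivity/epsilon) noise, the textbook
    epsilon-differentially-private mechanism for a single numeric query."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    return true_answer - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def laplace_density(z: float, mean: float, scale: float) -> float:
    """Density of the mechanism's output; on neighboring databases the two
    densities differ pointwise by a factor of at most exp(epsilon)."""
    return math.exp(-abs(z - mean) / scale) / (2 * scale)
```

Since the true answers on neighboring databases differ by at most `sensitivity`, the output densities differ by a factor of at most \(e^\epsilon\) at every point; this pointwise, information-theoretic guarantee is exactly what the paper's computational definitions weaken to hold only against efficient distinguishers.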
Lovett, Shachar, Omer Reingold, Luca Trevisan, and Salil Vadhan. “Pseudorandom bit generators that fool modular sums.” In Proceedings of the 13th International Workshop on Randomization and Computation (RANDOM ’09), Lecture Notes in Computer Science, 5687:615–630. Springer-Verlag, 2009.
We consider the following problem: for given \(n, M,\) produce a sequence \(X_1, X_2, . . . , X_n\) of bits that fools every linear test modulo \(M\). We present two constructions of generators for such sequences. For every constant prime power \(M\), the first construction has seed length \(O_M(\log(n/\epsilon))\), which is optimal up to the hidden constant. (A similar construction was independently discovered by Meka and Zuckerman [MZ]). The second construction works for every \(M, n,\) and has seed length \(O(\log n + \log(M/\epsilon) \log( M \log(1/\epsilon)))\).
The problem we study is a generalization of the problem of constructing small-bias distributions [NN], which are solutions to the \(M=2\) case. We note that even for the case \(M=3\), the best previously known constructions were generators fooling general bounded-space computations, and required \(O(\log^2 n)\) seed length.
For our first construction, we show how to employ recently constructed generators for sequences of elements of \(\mathbb{Z}_M\) that fool small-degree polynomials modulo \(M\). The most interesting technical component of our second construction is a variant of the derandomized graph squaring operation of [RV]. Our generalization handles a product of two distinct graphs with distinct bounds on their expansion. This is then used to produce pseudorandom walks where each step is taken on a different regular directed graph (rather than pseudorandom walks on a single regular directed graph as in [RTV, RV]).
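For the base case \(M=2\) (small-bias distributions), a minimal sketch of the classical Alon–Goldreich–Håstad–Peralta "powering" construction may clarify what "fooling every linear test" means; this is not the paper's generator, and the field size and test below are illustrative choices.

```python
M_BITS = 8
IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^8), reduced modulo IRRED."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= IRRED
        b >>= 1
    return r

def aghp_bits(x: int, y: int, n: int) -> list[int]:
    """Seed (x, y) in GF(2^8)^2; output bit i is the inner product <x^i, y>."""
    bits, xp = [], 1
    for _ in range(n):
        xp = gf_mul(xp, x)  # maintain xp = x^i
        bits.append(bin(xp & y).count("1") & 1)
    return bits

def bias(test: list[int]) -> float:
    """Exact bias of a 0/1 linear test mod 2, averaged over all 2^16 seeds."""
    n, ones = len(test), 0
    for x in range(1 << M_BITS):
        for y in range(1 << M_BITS):
            r = aghp_bits(x, y, n)
            ones += sum(t * b for t, b in zip(test, r)) % 2
    total = 1 << (2 * M_BITS)
    return abs(2 * ones / total - 1)
```

Every nonempty linear test has bias at most \(n/2^8\) here, because the test reduces to whether a nonzero degree-\(\le n\) polynomial vanishes at \(x\); choosing the field size to be roughly \(n/\epsilon\) gives the \(O(\log(n/\epsilon))\) seed length that the paper's first construction matches for general constant prime-power \(M\).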