# Publications

## 2013
Mahmoody, Mohammad, Tal Moran, and Salil Vadhan. “Publicly verifiable proofs of sequential work.” In Innovations in Theoretical Computer Science (ITCS ‘13), 373-388. ACM, 2013.

Version History: Preliminary version posted as Cryptology ePrint Archive Report 2011/553, under the title “Non-Interactive Time-Stamping and Proofs of Work in the Random Oracle Model”.

We construct a publicly verifiable protocol for proving computational work based on collision-resistant hash functions and a new plausible complexity assumption regarding the existence of “inherently sequential” hash functions. Our protocol is based on a novel construction of time-lock puzzles. Given a sampled “puzzle” $$\mathcal{P} \gets \mathbf{D}_n$$, where $$n$$ is the security parameter and $$\mathbf{D}_n$$ is the distribution of the puzzles, a corresponding “solution” can be generated using $$N$$ evaluations of the sequential hash function, where $$N > n$$ is another parameter, while any feasible adversarial strategy for generating valid solutions must take at least as much time as $$\Omega(N)$$ sequential evaluations of the hash function after receiving $$\mathcal{P}$$. Thus, valid solutions constitute a “proof” that $$\Omega(N)$$ parallel time elapsed since $$\mathcal{P}$$ was received. Solutions can be publicly and efficiently verified in time $$\mathrm{poly}(n) \cdot \mathrm{polylog}(N)$$. Applications of these “time-lock puzzles” include noninteractive timestamping of documents (when the distribution over the possible documents corresponds to the puzzle distribution $$\mathbf{D}_n$$) and universally verifiable CPU benchmarks.
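
To make the sequential-work requirement concrete, here is a minimal Python sketch of the underlying idea, with SHA-256 as a stand-in for an inherently sequential hash. It is a toy only: the paper's actual construction additionally commits to the computation over a depth-robust DAG so that solutions can be verified in $$\mathrm{poly}(n) \cdot \mathrm{polylog}(N)$$ time, whereas this sketch verifies by full recomputation.

```python
import hashlib

def solve(puzzle: bytes, N: int) -> bytes:
    """Evaluate an N-step hash chain seeded by the puzzle.

    Each step depends on the previous output, so even a massively
    parallel solver needs about N sequential hash evaluations.
    """
    h = puzzle
    for _ in range(N):
        h = hashlib.sha256(h).digest()
    return h

def verify(puzzle: bytes, solution: bytes, N: int) -> bool:
    """Toy verification by recomputation (O(N) work).

    The paper avoids this cost via Merkle-style commitments over a
    depth-robust DAG, enabling polylog(N)-time public verification.
    """
    return solve(puzzle, N) == solution
```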

Our construction is secure in the standard model under complexity assumptions (collision-resistant hash functions and inherently sequential hash functions), and makes black-box use of the underlying primitives. Consequently, the corresponding construction in the random oracle model is secure unconditionally. Moreover, as it is a public-coin protocol, it can be made non-interactive in the random oracle model using the Fiat-Shamir Heuristic.

Our construction makes a novel use of “depth-robust” directed acyclic graphs—ones whose depth remains large even after removing a constant fraction of vertices—which were previously studied for the purpose of complexity lower bounds. The construction bypasses a recent negative result of Mahmoody, Moran, and Vadhan (CRYPTO ‘11) for time-lock puzzles in the random oracle model, which showed that it is impossible to have time-lock puzzles like ours in the random oracle model if the puzzle generator also computes a solution together with the puzzle.

Rothblum, Guy N., Salil Vadhan, and Avi Wigderson. “Interactive proofs of proximity: delegating computation in sublinear time.” In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC ‘13), 793-802. New York, NY: ACM, 2013.

We study interactive proofs with sublinear-time verifiers. These proof systems can be used to ensure approximate correctness for the results of computations delegated to an untrusted server. Following the literature on property testing, we seek proof systems where with high probability the verifier accepts every input in the language, and rejects every input that is far from the language. The verifier’s query complexity (and computation complexity), as well as the communication, should all be sublinear. We call such a proof system an Interactive Proof of Proximity (IPP).
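
In symbols (our paraphrase, with 2/3 and 1/3 as representative constants, and distance measured as relative Hamming distance), an IPP verifier $$V$$ with oracle access to the input $$x$$ must satisfy the following, with queries, communication, and verifier running time all sublinear in $$|x|$$:

```latex
% Completeness: true statements have a convincing prover.
x \in L \;\Longrightarrow\; \exists P:\ \Pr[\langle P, V^x \rangle \text{ accepts}] \ge 2/3
% Soundness: inputs far from L are rejected, no matter what the prover does.
\mathrm{dist}(x, L) \ge \varepsilon \;\Longrightarrow\; \forall P^*:\ \Pr[\langle P^*, V^x \rangle \text{ accepts}] \le 1/3
```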

• On the positive side, our main result is that all languages in $$\mathcal{NC}$$ have Interactive Proofs of Proximity with roughly $$\sqrt{n}$$ query and communication complexities, and $$\mathrm{polylog}(n)$$ communication rounds.

This is achieved by identifying a natural language, membership in an affine subspace (for a structured class of subspaces), that is complete for constructing interactive proofs of proximity, and providing efficient protocols for it. In building an IPP for this complete language, we show a tradeoff between the query and communication complexity and the number of rounds. For example, we give a 2-round protocol with roughly $$n^{3/4}$$ queries and communication.

• On the negative side, we show that there exist natural languages in $$\mathcal{NC}^1$$ for which the sum of queries and communication in any constant-round interactive proof of proximity must be polynomially related to $$n$$. In particular, for any 2-round protocol, the sum of queries and communication must be at least $$\tilde{\Omega}(\sqrt{n})$$.

• Finally, we construct much better IPPs for specific functions, such as bipartiteness on random or well-mixing graphs, and the majority function. The query complexities of these protocols are provably better (by exponential or polynomial factors) than what is possible in the standard property testing model, i.e. without a prover.

Vadhan, Salil, and Colin Jia Zheng. “A uniform min-max theorem with applications in cryptography.” In Ran Canetti and Juan Garay, editors, Advances in Cryptology—CRYPTO ‘13, Lecture Notes in Computer Science, 8042:93-110. Springer-Verlag, 2013.

Version History: Full version published on ECCC and IACR ePrint, 2013.

We present a new, more constructive proof of von Neumann’s Min-Max Theorem for two-player zero-sum games; specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior ’99), with the advantage that it runs in $$\mathrm{poly}(n)$$ time even when a pure strategy for the first player is a distribution chosen from a set of distributions over $$\{0,1\}^n$$. This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
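
The following Python sketch shows the Freund-Schapire-style multiplicative-weights template that the theorem builds on, for a finite payoff matrix. It is only a toy: the paper's contribution is making this constructive and efficient even when the first player's pure strategies are distributions over $$\{0,1\}^n$$, a regime this finite example does not capture.

```python
import numpy as np

def approx_minmax(A: np.ndarray, T: int = 1000, eta: float = 0.1) -> np.ndarray:
    """Multiplicative weights for a zero-sum game with losses A[i, j] in [0, 1].

    The row player maintains weights over its pure strategies; at each
    round the column player best-responds to the row player's mixed
    strategy.  The empirical distribution of those best responses is a
    near-optimal mixed strategy for the column player.
    """
    n_rows, n_cols = A.shape
    w = np.ones(n_rows)
    responses = np.zeros(n_cols)
    for _ in range(T):
        p = w / w.sum()                # row player's current mixed strategy
        j = int(np.argmax(p @ A))      # column player's best response to p
        responses[j] += 1
        w *= np.exp(-eta * A[:, j])    # penalize rows that lose against j
    return responses / T               # near-optimal column strategy
```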

We describe several applications, including a more modular and improved uniform version of Impagliazzo’s Hardcore Theorem (FOCS ’95), showing impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC ’11) for the nonuniform setting), and efficiently simulating high entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC ’09)).

Reingold, Omer, Thomas Steinke, and Salil Vadhan. “Pseudorandomness for regular branching programs via Fourier analysis.” In Sofya Raskhodnikova and José Rolim, editors, Proceedings of the 17th International Workshop on Randomization and Computation (RANDOM ‘13), Lecture Notes in Computer Science, 8096:655-670. Springer-Verlag, 2013.

Version History: Full version posted as ECCC TR13-086 and arXiv:1306.3004 [cs.CC].

We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is $$O(\log^2 n)$$, where $$n$$ is the length of the branching program. The previous best seed length known for this model was $$n^{1/2+o(1)}$$, which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of $$s^{1/2+o(1)}$$ for arbitrary branching programs of size $$s$$). Our techniques also give seed length $$n^{1/2+o(1)}$$ for general oblivious, read-once branching programs of width $$2^{n^{o(1)}}$$, which is incomparable to the results of Impagliazzo et al.

Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width $$w$$ has Fourier mass at most $$(2w^2)^k$$ at level $$k$$, independent of the length of the program.
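
For orientation, the Fourier mass at level $$k$$ is the total (norm) size of the program's matrix-valued Fourier coefficients $$\hat{B}[s]$$ over index sets of size $$k$$. In notation paraphrased from the full version (the exact norm convention here is our assumption, not quoted from the paper), the bound reads:

```latex
L^k(B) \;=\; \sum_{s \subseteq [n],\ |s| = k} \left\| \hat{B}[s] \right\|
\;\le\; (2w^2)^k ,
```

independent of the length of the program.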

## 2012
Schoenebeck, Grant, and Salil Vadhan. “The computational complexity of Nash equilibria in concisely represented games.” ACM Transactions on Computation Theory 4, no. 2 (2012).

Version History: Preliminary versions as ECCC TR05-052 (https://eccc.weizmann.ac.il/report/2005/052/; attached as ECCC2005.pdf) and in EC '06 (https://dl.acm.org/doi/10.1145/1134707.1134737; attached as EC2006.pdf).

Games may be represented in many different ways, and different representations of games affect the complexity of problems associated with games, such as finding a Nash equilibrium. The traditional method of representing a game is to explicitly list all the payoffs, but this incurs an exponential blowup as the number of agents grows. We study two models of concisely represented games: circuit games, where the payoffs are computed by a given boolean circuit, and graph games, where each agent’s payoff is a function of only the strategies played by its neighbors in a given graph. For these two models, we study the complexity of four questions: determining if a given strategy is a Nash equilibrium, finding a Nash equilibrium, determining if there exists a pure Nash equilibrium, and determining if there exists a Nash equilibrium in which the payoffs to a player meet some given guarantees. In many cases, we obtain tight results, showing that the problems are complete for various complexity classes.
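
As a small illustration of the first question in the graph-game model, the following Python sketch checks whether a given pure strategy profile is a Nash equilibrium. All interfaces (`payoff`, `neighbors`, `strategies`) are hypothetical stand-ins for exposition, not from the paper.

```python
def is_pure_nash(payoff, neighbors, strategy, strategies) -> bool:
    """Check whether a pure strategy profile is a Nash equilibrium
    of a graph game (all interfaces are illustrative stand-ins).

    payoff[i]: maps (player i's strategy, neighbors' strategies) to a payoff.
    neighbors[i]: the players whose strategies affect player i.
    strategy: the profile under test; strategies[i]: player i's strategy set.
    """
    for i in payoff:
        local = tuple(strategy[j] for j in neighbors[i])
        current = payoff[i](strategy[i], local)
        # Player i must have no profitable unilateral deviation.
        if any(payoff[i](s, local) > current for s in strategies[i]):
            return False
    return True
```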

Dodis, Yevgeniy, Thomas Ristenpart, and Salil Vadhan. “Randomness condensers for efficiently samplable, seed-dependent sources.” In Ronald Cramer, editor, Proceedings of the 9th IACR Theory of Cryptography Conference (TCC ‘12), Lecture Notes in Computer Science, 7194:618-635. Springer-Verlag, 2012.

We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions $$\mathsf{Cond} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$$ such that if we choose a random seed $$S \gets \{0,1\}^d$$, and a source $$X = \mathcal{A}(S)$$ is generated by a randomized circuit $$\mathcal{A}$$ of size $$t$$ such that $$X$$ has min-entropy at least $$k$$ given $$S$$, then $$\mathsf{Cond}(X; S)$$ should have min-entropy at least some $$k'$$ given $$S$$. The distinction from the standard notion of randomness condensers is that the source $$X$$ may be correlated with the seed $$S$$ (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where $$k' = m$$) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS ‘00).

We show that:

• Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size $$t$$ of the adversarial sampling algorithm $$\mathcal{A}$$. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy $$k' = m - \mathcal{O}(\log t)$$, i.e. logarithmic entropy deficiency (see the sketch after this list).

• Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible “squared advantage” [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.

• Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.
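
A minimal Python sketch of the first bullet's construction, with SHA-256 standing in for a sufficiently strong collision-resistant hash family keyed by the seed; the quantitative guarantee $$k' = m - \mathcal{O}(\log t)$$ is of course not certified by this toy.

```python
import hashlib

def cond(x: bytes, seed: bytes, m_bytes: int = 16) -> bytes:
    """Toy seed-dependent condenser: a seeded, truncated hash.

    Even though the source X = A(seed) may depend on the seed, a
    strong enough collision-resistant family keeps the output within
    O(log t) bits of full min-entropy, per the first bullet above.
    SHA-256 is only a stand-in for such a family.
    """
    return hashlib.sha256(seed + x).digest()[:m_bytes]
```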

Vadhan, Salil, and Colin Jia Zheng. “Characterizing pseudoentropy and simplifying pseudorandom generator constructions.” In Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC ‘12), 817-836. ACM, 2012.

Version History: Full version posted as ECCC TR11-141.

We provide a characterization of pseudoentropy in terms of hardness of sampling: let $$(X, B)$$ be jointly distributed random variables such that $$B$$ takes values in a polynomial-sized set. We show that $$B$$ is computationally indistinguishable from a random variable of higher Shannon entropy given $$X$$ if and only if there is no probabilistic polynomial-time algorithm $$S$$ such that $$(X, S(X))$$ has small KL divergence from $$(X, B)$$. This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS ‘95) for Shannon entropy (rather than min-entropy).

Using this characterization, we show that if $$f$$ is a one-way function, then $$(f(U_n), U_n)$$ has “next-bit pseudoentropy” at least $$n + \log n$$, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC ‘10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs hash functions that are randomness extractors (e.g. universal hash functions), rather than ones that support “local list-decoding” (as in the Goldreich–Levin hardcore predicate, STOC ‘89).

With an additional idea, we also show how to improve the seed length of the pseudorandom generator to $$\tilde{O}(n^3)$$, compared to $$\tilde{O}(n^4)$$ in the construction of Haitner et al.

Thaler, Justin, Jonathan Ullman, and Salil Vadhan. “Faster algorithms for privately releasing marginals.” In Artur Czumaj, Kurt Mehlhorn, Andrew M. Pitts, and Roger Wattenhofer, editors, Proceedings of the 39th International Colloquium on Automata, Languages, and Programming (ICALP ‘12), Lecture Notes in Computer Science, 7391:810-821. Springer-Verlag, 2012.

Version History: Full version posted as arXiv:1205.1758v2.

We study the problem of releasing $$k$$-way marginals of a database $$D \in (\{0,1\}^d)^n$$, while preserving differential privacy. The answer to a $$k$$-way marginal query is the fraction of $$D$$’s records $$x \in \{0,1\}^d$$ with a given value in each of a given set of up to $$k$$ columns. Marginal queries enable a rich class of statistical analyses of a dataset, and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data analysis (cf. Barak et al., PODS ’07).
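
For concreteness, here is what answering a single $$k$$-way marginal query exactly (before any privacy protection is applied) looks like in Python; the paper's algorithm instead releases one private summary that can answer all such queries.

```python
import numpy as np

def marginal(D: np.ndarray, cols: tuple, vals: tuple) -> float:
    """Answer one k-way marginal query on a binary database.

    D is an n x d 0/1 matrix; the query fixes up to k columns to the
    bit values in `vals` and asks what fraction of rows match.
    """
    mask = np.ones(len(D), dtype=bool)
    for c, v in zip(cols, vals):
        mask &= (D[:, c] == v)
    return float(mask.mean())

# Example: the fraction of records with attribute 0 = 1 and attribute 3 = 0.
# D = np.random.randint(0, 2, size=(1000, 8)); marginal(D, (0, 3), (1, 0))
```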

We give an algorithm that runs in time $$d^{O(\sqrt{k})}$$ and releases a private summary capable of answering any $$k$$-way marginal query with at most $$\pm 0.01$$ error on every query as long as $$n \geq d^{O(\sqrt{k})}$$. To our knowledge, ours is the first algorithm capable of privately releasing marginal queries with non-trivial worst-case accuracy guarantees in time substantially smaller than the number of $$k$$-way marginal queries, which is $$d^{\Theta(k)}$$ (for $$k \ll d$$).

Dodis, Yevgeniy, Adriana López-Alt, Ilya Mironov, and Salil Vadhan. “Differential privacy with imperfect randomness.” In Ran Canetti and Reihaneh Safavi-Naini, editors, Proceedings of the 32nd International Cryptology Conference (CRYPTO ‘12), Lecture Notes in Computer Science, 7417:497-516. Springer-Verlag, 2012.

In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC’07) showed that if a source of randomness $$\mathcal{R}$$ is “good enough” to generate a secret key capable of encrypting $$k$$ bits, then one can deterministically extract nearly $$k$$ almost uniform bits from $$\mathcal{R}$$, suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an “extractable” source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific “non-extractable” sources of randomness, such as the $$\gamma$$-Santha-Vazirani (SV) source, where each next bit has fresh entropy but is allowed to have a small bias $$\gamma < 1$$ (possibly depending on prior bits).
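
As a reference point, a $$\gamma$$-SV source can be sketched in Python as follows, with `bias_fn` a hypothetical stand-in for the adversary's choice of per-bit bias.

```python
import random

def sv_source(n: int, gamma: float, bias_fn) -> list:
    """Sample n bits from a toy gamma-Santha-Vazirani source.

    bias_fn(prefix) returns an adversarial bias in [-gamma, gamma];
    bit i is then 1 with probability (1 + bias) / 2.  Every bit has
    fresh entropy, but may be skewed based on all previous bits.
    """
    bits = []
    for _ in range(n):
        bias = max(-gamma, min(gamma, bias_fn(tuple(bits))))
        bits.append(1 if random.random() < (1 + bias) / 2 else 0)
    return bits
```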

We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC’06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary “low sensitivity” functions that works even with randomness coming from a $$\gamma$$-Santha-Vazirani source, for any $$\gamma < 1$$. This provides a somewhat surprising “separation” between traditional privacy and differential privacy with respect to imperfect randomness.

Interestingly, the design of our mechanism is quite different from the traditional “additive-noise” mechanisms (e.g., Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (non-trivial) “SV-robust” mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.

Dwork, Cynthia, Moni Naor, and Salil Vadhan. “The privacy of the analyst and the power of the state.” In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘12), 400-409. IEEE, 2012.

We initiate the study of privacy for the analyst in differentially private data analysis. That is, we are concerned not only with ensuring differential privacy for the data (i.e. the individuals or customers), which is the usual concern of differential privacy, but also with (differential) privacy for the set of queries posed by each data analyst. The goal is to achieve privacy with respect to other analysts, or users of the system. This problem arises only in the context of stateful privacy mechanisms, in which the responses to queries depend on other queries posed (a recent wave of results in the area cleverly coordinated noise and state in order to privately answer very large numbers of queries). We argue that the problem is real by proving an exponential gap between the number of queries that can be answered (with non-trivial error) by stateless and stateful differentially private mechanisms. We then give a stateful algorithm for differentially private data analysis that also ensures differential privacy for the analyst and can answer exponentially many queries.

Gopalan, Parikshit, Raghu Meka, Omer Reingold, Luca Trevisan, and Salil Vadhan. “Better pseudorandom generators via milder pseudorandom restrictions.” In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘12), 120-129. IEEE, 2012.

Version History: Full version posted as ECCC TR12-123 and as arXiv:1210.0049 [cs.CC].

We present an iterative approach to constructing pseudorandom generators, based on the repeated application of mild pseudorandom restrictions. We use this template to construct pseudorandom generators for combinatorial rectangles and read-once $$\mathsf{CNF}$$s and a hitting set generator for width-3 branching programs, all of which achieve near-optimal seed-length even in the low-error regime: We get seed-length $$\tilde{O}(\log(n/\epsilon))$$ for error $$\epsilon$$. Previously, only constructions with seed-length $$O(\log^{3/2} n)$$ or $$O(\log^2 n)$$ were known for these classes with error $$\epsilon = 1/\mathrm{poly}(n)$$. The (pseudo)random restrictions we use are milder than those typically used for proving circuit lower bounds in that we only set a constant fraction of the bits at a time. While such restrictions do not simplify the functions drastically, we show that they can be derandomized using small-bias spaces.

## 2011
Kamp, Jesse, Anup Rao, Salil Vadhan, and David Zuckerman. “Deterministic extractors for small-space sources.” Journal of Computer and System Sciences 77, no. 1 (2011): 191-220.

Version History: Special issue to celebrate Richard Karp's Kyoto Prize. Extended abstract in STOC '06.

We give polynomial-time, deterministic randomness extractors for sources generated in small space, where we model space-$$s$$ sources on $$\{0,1\}^n$$ as sources generated by width-$$2^s$$ branching programs. Specifically, there is a constant $$\eta > 0$$ such that for any $$\zeta > n^{-\eta}$$, our algorithm extracts $$m = (\delta - \zeta)n$$ bits that are exponentially close to uniform (in variation distance) from space-$$s$$ sources with min-entropy $$\delta n$$, where $$s = \Omega(\zeta^3 n)$$. Previously, nothing was known for $$\delta \ll 1/2$$, even for space $$0$$. Our results are obtained by a reduction to the class of total-entropy independent sources. This model generalizes both the well-studied models of independent sources and symbol-fixing sources. These sources consist of a set of $$r$$ independent smaller sources over $$\{0,1\}^\ell$$, where the total min-entropy over all the smaller sources is $$k$$. We give deterministic extractors for such sources when $$k$$ is as small as $$\mathrm{polylog}(r)$$, for small enough $$\ell$$.

Chung, Kai-Min, Omer Reingold, and Salil Vadhan. “S-T connectivity on digraphs with a known stationary distribution.” ACM Transactions on Algorithms 7, no. 3 (2011).

Version History: Preliminary versions in CCC '07 and on ECCC (TR07-030).

We present a deterministic logspace algorithm for solving S-T Connectivity on directed graphs if: (i) we are given a stationary distribution of the random walk on the graph in which both of the input vertices $$s$$ and $$t$$ have nonnegligible probability mass and (ii) the random walk which starts at the source vertex $$s$$ has polynomial mixing time. This result generalizes the recent deterministic logspace algorithm for S-T Connectivity on undirected graphs [Reingold, 2008]. It identifies knowledge of the stationary distribution as the gap between the S-T Connectivity problems we know how to solve in logspace (L) and those that capture all of randomized logspace (RL).

Ullman, Jon, and Salil Vadhan. “PCPs and the hardness of generating synthetic data.” In Yuval Ishai, editor, Proceedings of the 8th IACR Theory of Cryptography Conference (TCC ‘11), Lecture Notes in Computer Science, 6597:400-416. Springer-Verlag, 2011.

Version History: Full version posted as ECCC TR10-017. Invited to the Journal of Cryptology special issue on selected papers from TCC 2011.

Assuming the existence of one-way functions, we show that there is no polynomial-time, differentially private algorithm $$\mathcal{A}$$ that takes a database $$D \in (\{0,1\}^d)^n$$ and outputs a “synthetic database” $$\hat{D}$$ all of whose two-way marginals are approximately equal to those of $$D$$. (A two-way marginal is the fraction of database rows $$x \in \{0,1\}^d$$ with a given pair of values in a given pair of columns.) This answers a question of Barak et al. (PODS ‘07), who gave an algorithm running in time $$\mathrm{poly}(n, 2^d)$$.

Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by Dwork et al., STOC ‘09) with encodings based on probabilistically checkable proofs.

We also present both negative and positive results for generating “relaxed” synthetic data, where the fraction of rows in $$D$$ satisfying a predicate $$c$$ is estimated by applying $$c$$ to each row of $$\hat{D}$$ and aggregating the results in some way.

Dvir, Zeev, Dan Gutfreund, Guy Rothblum, and Salil Vadhan. “On approximating the entropy of polynomial mappings.” In Proceedings of the Second Symposium on Innovations in Computer Science (ICS 2011), 460-475. Tsinghua University Press, 2011.

Version History: Full version posted as ECCC TR10-060.

We investigate the complexity of the following computational problem:

Polynomial Entropy Approximation (PEA): Given a low-degree polynomial mapping $$p : \mathbb{F}^n \to \mathbb{F}^m$$, where $$\mathbb{F}$$ is a finite field, approximate the output entropy $$H(p(U_n))$$, where $$U_n$$ is the uniform distribution on $$\mathbb{F}^n$$ and $$H$$ may be any of several entropy measures.
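
To see what an exact (rather than approximate) computation involves, here is a brute-force Python sketch of $$H(p(U_n))$$ for Shannon entropy over $$\mathbb{F}_2$$. It enumerates all $$2^n$$ inputs and so is feasible only for tiny $$n$$, which is exactly why the complexity of the approximation problem matters.

```python
from collections import Counter
from itertools import product
from math import log2

def shannon_entropy_of_map(p, n: int) -> float:
    """Brute-force Shannon entropy of p(U_n) for a mapping p on F_2^n.

    Enumerates all 2^n inputs and tallies the output distribution;
    exponential time, for illustration only.
    """
    counts = Counter(p(x) for x in product((0, 1), repeat=n))
    total = 2 ** n
    return -sum(c / total * log2(c / total) for c in counts.values())

# Example: a degree-2 map p(x) = (x1*x2 XOR x3, x1) on F_2^3.
# shannon_entropy_of_map(lambda x: (x[0] & x[1] ^ x[2], x[0]), 3)
```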

We show:

• Approximating the Shannon entropy of degree-3 polynomials $$p : \mathbb{F}_2^n \to \mathbb{F}_2^m$$ over $$\mathbb{F}_2$$ to within an additive constant (or even $$n^{0.9}$$) is complete for $$\mathbf{SZKP_L}$$, the class of problems having statistical zero-knowledge proofs where the honest verifier and its simulator are computable in logarithmic space. ($$\mathbf{SZKP_L}$$ contains most of the natural problems known to be in the full class $$\mathbf{SZKP}$$.)

• For prime fields $$\mathbb{F} \neq \mathbb{F}_2$$ and homogeneous quadratic polynomials $$p : \mathbb{F}^n \to \mathbb{F}^m$$, there is a probabilistic polynomial-time algorithm that distinguishes the case that $$p(U_n)$$ has entropy smaller than $$k$$ from the case that $$p(U_n)$$ has min-entropy (or even Rényi entropy) greater than $$(2 + o(1))k$$.

• For degree-$$d$$ polynomials $$p : \mathbb{F}_2^n \to \mathbb{F}_2^m$$, there is a polynomial-time algorithm that distinguishes the case that $$p(U_n)$$ has max-entropy smaller than $$k$$ (where the max-entropy of a random variable is the logarithm of its support size) from the case that $$p(U_n)$$ has max-entropy at least $$(1 + o(1)) \cdot k^d$$ (for fixed $$d$$ and large $$k$$).

Mahmoody, Mohammad, Tal Moran, and Salil Vadhan. “Time-lock puzzles in the random oracle model.” In Phillip Rogaway, editor, Advances in Cryptology—CRYPTO ‘11, Lecture Notes in Computer Science, 6841:39-50. Springer-Verlag, 2011.

A time-lock puzzle is a mechanism for sending messages “to the future”. The sender publishes a puzzle whose solution is the message to be sent, thus hiding it until enough time has elapsed for the puzzle to be solved. For time-lock puzzles to be useful, generating a puzzle should take less time than solving it. Since adversaries may have access to many more computers than honest solvers, massively parallel solvers should not be able to produce a solution much faster than serial ones.

To date, we know of only one mechanism that is believed to satisfy these properties: the one proposed by Rivest, Shamir and Wagner (1996), who originally introduced the notion of time-lock puzzles. Their puzzle is based on the serial nature of exponentiation and the hardness of factoring, and is therefore vulnerable to advances in factoring techniques (as well as to quantum attacks).

In this work, we study the possibility of constructing time-lock puzzles in the random-oracle model. Our main result is negative, ruling out time-lock puzzles that require more parallel time to solve than the total work required to generate a puzzle. In particular, this should rule out black-box constructions of such time-lock puzzles from one-way permutations and collision-resistant hash functions. On the positive side, we construct a time-lock puzzle with a linear gap in parallel time: a new puzzle can be generated with one round of $$n$$ parallel queries to the random oracle, but $$n$$ rounds of serial queries are required to solve it (even for massively parallel adversaries).

## 2010
Guruswami, Venkatesan, and Salil Vadhan. “A lower bound on list size for list decoding.” IEEE Transactions on Information Theory 56, no. 11 (2010): 5681-5688.

Version History: Preliminary version published in RANDOM '05 (https://link.springer.com/chapter/10.1007/11538462_27) and attached as RANDOM2005.pdf.

A $$q$$-ary error-correcting code $$C \subseteq \{1, 2, \ldots, q\}^n$$ is said to be list-decodable to radius $$\rho$$ with list size $$L$$ if every Hamming ball of radius $$\rho$$ contains at most $$L$$ codewords of $$C$$. We prove that in order for a $$q$$-ary code to be list-decodable up to radius $$(1 - 1/q)(1 - \varepsilon)n$$, we must have $$L = \Omega(1/\varepsilon^2)$$. Specifically, we prove that there exist a constant $$c_q > 0$$ and a function $$f_q$$ such that for small enough $$\varepsilon > 0$$, if $$C$$ is list-decodable to radius $$(1 - 1/q)(1 - \varepsilon)n$$ with list size $$c_q/\varepsilon^2$$, then $$C$$ has at most $$f_q(\varepsilon)$$ codewords, independent of $$n$$. This result is asymptotically tight (treating $$q$$ as a constant), since such codes with an exponential (in $$n$$) number of codewords are known for list size $$L = O(1/\varepsilon^2)$$.

A result similar to ours is implicit in Blinovsky [Bli] for the binary $$(q=2)$$ case. Our proof works for all alphabet sizes, and is technically and conceptually simpler.

Rothblum, Guy, and Salil Vadhan. “Are PCPs inherent in efficient arguments?” Computational Complexity 19, no. 2 (2010): 265-304.

Version History: Special issue on CCC '09.

Starting with Kilian (STOC ‘92), several works have shown how to use probabilistically checkable proofs (PCPs) and cryptographic primitives such as collision-resistant hashing to construct very efficient argument systems (a.k.a. computationally sound proofs), for example with polylogarithmic communication complexity. Ishai et al. (CCC ‘07) raised the question of whether PCPs are inherent in efficient arguments, and if so, to what extent. We give evidence that they are, by showing how to convert any argument system whose soundness is reducible to the security of some cryptographic primitive into a PCP system whose efficiency is related to that of the argument system and the reduction (under certain complexity assumptions).

Birrell, Eleanor, and Salil Vadhan. “Composition of zero-knowledge proofs with efficient provers.” In Daniele Micciancio, editor, Proceedings of the 7th IACR Theory of Cryptography Conference (TCC ‘10), Lecture Notes in Computer Science, 5978:572-587. Springer-Verlag, 2010.

We revisit the composability of different forms of zero-knowledge proofs when the honest prover strategy is restricted to be polynomial time (given an appropriate auxiliary input). Our results are:

1. When restricted to efficient provers, the original Goldwasser–Micali–Rackoff (GMR) definition of zero knowledge (STOC ‘85), here called plain zero knowledge, is closed under a constant number of sequential compositions (on the same input). This contrasts with the case of unbounded provers, where Goldreich and Krawczyk (ICALP ‘90, SICOMP ‘96) exhibited a protocol that is zero knowledge under the GMR definition, but for which the sequential composition of 2 copies is not zero knowledge.

2. If we relax the GMR definition to only require that the simulation is indistinguishable from the verifier’s view by uniform polynomial-time distinguishers, with no auxiliary input beyond the statement being proven, then again zero knowledge is not closed under sequential composition of 2 copies.

3. We show that auxiliary-input zero knowledge with efficient provers is not closed under parallel composition of 2 copies under the assumption that there is a secure key agreement protocol (in which it is easy to recognize valid transcripts). Feige and Shamir (STOC ‘90) gave similar results under the seemingly incomparable assumptions that (a) the discrete logarithm problem is hard, or (b) $$\mathcal{UP}\nsubseteq\mathcal{BPP}$$ and one-way functions exist.

Haitner, Iftach, Thomas Holenstein, Omer Reingold, Salil Vadhan, and Hoeteck Wee. “Universal one-way hash functions via inaccessible entropy.” In Henri Gilbert, editor, Advances in Cryptology—EUROCRYPT ‘10, Lecture Notes in Computer Science, 6110:616-637. Springer-Verlag, 2010.

This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function, due to Rompel (STOC 1990). We give a simpler construction of UOWHFs, which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of inaccessible entropy (Haitner et al., STOC 2009). With this perspective, we observe that a small tweak of any one-way function $$f$$ is already a weak form of a UOWHF: consider $$F(x, i)$$ that outputs the $$i$$-bit prefix of $$f(x)$$. If $$F$$ were a UOWHF, then given a random $$x$$ and $$i$$ it would be hard to come up with $$x' \neq x$$ such that $$F(x, i) = F(x', i)$$. While this may not be the case, we show (rather easily) that it is hard to sample $$x'$$ with almost full entropy among all the possible such values of $$x'$$. The rest of our construction simply amplifies and exploits this basic property.
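
A minimal Python rendering of the basic building block $$F$$, with SHA-256 standing in for an arbitrary one-way function $$f$$:

```python
import hashlib

def F(x: bytes, i: int) -> str:
    """Output the i-bit prefix of f(x), with SHA-256 playing f.

    Collisions F(x', i) = F(x, i) with x' != x are easy to find for
    small i; the paper's point is that sampling such x' with almost
    full entropy is hard, and this "inaccessible entropy" can be
    amplified into a full-fledged UOWHF.
    """
    digest = hashlib.sha256(x).digest()
    bits = ''.join(format(b, '08b') for b in digest)
    return bits[:i]
```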

With this and other recent works, the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments, and UOWHFs) out of one-way functions are now to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy.