Computational Entropy (NSF CCF-1116616)

2018
Raghunathan, Ananth, Gil Segev, and Salil P. Vadhan. “Deterministic public-key encryption for adaptively-chosen plaintext distributions.” Journal of Cryptology 31, no. 4 (2018): 1012-1063.

Version History: Preliminary versions in EUROCRYPT ‘13 and Cryptology ePrint report 2013/125.

Bellare, Boldyreva, and O’Neill (CRYPTO '07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially-chosen plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is not realistic to assume that adversaries do not take the public key into account when attacking a scheme.

We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions after seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by \(2^p\), where \(p\) can be any predetermined polynomial in the security parameter. For example, with \(p=0\) we capture plaintext distributions that are independent of the public key, and with \(p=O(s \log s)\) we capture, in particular, all plaintext distributions that are samplable by circuits of size \(s\).
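To see why the last example is captured (a standard counting estimate, not taken verbatim from the paper): there are at most \(2^{O(s \log s)}\) Boolean circuits of size \(s\), so choosing the polynomial \(p = O(s \log s)\) with a sufficiently large hidden constant makes \(2^p\) an upper bound on the number of distributions samplable by size-\(s\) circuits.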

Within our framework we present both constructions in the random-oracle model based on any public-key encryption scheme, and constructions in the standard model based on lossy trapdoor functions (thus, based on a variety of number-theoretic assumptions). Previously known constructions heavily relied on the independence between the plaintext distributions and the public key for the purposes of randomness extraction. In our setting, however, randomness extraction becomes significantly more challenging once the plaintext distributions and the public key are no longer independent. Our approach is inspired by research on randomness extraction from seed-dependent distributions. Underlying our approach is a new generalization of a method for such randomness extraction, originally introduced by Trevisan and Vadhan (FOCS '00) and Dodis (PhD Thesis, MIT, '00).

JCRYPTOL2018.pdf EUROCRYPT2013.pdf IACR2013.pdf
2017
Steinke, Thomas, Salil Vadhan, and Andrew Wan. “Pseudorandomness and Fourier growth bounds for width 3 branching programs.” Theory of Computing – Special Issue on APPROX-RANDOM 2014 13, no. 12 (2017): 1-50.

Version History: a conference version of this paper appeared in the Proceedings of the 18th International Workshop on Randomization and Computation (RANDOM'14). Full version posted as ECCC TR14-076 and arXiv:1405.7028 [cs.CC].

We present an explicit pseudorandom generator for oblivious, read-once, width-3 branching programs, which can read their input bits in any order. The generator has seed length \(\tilde{O}(\log^3 n)\). The best previously known seed length for this model is \(n^{1/2+o(1)}\), due to Impagliazzo, Meka, and Zuckerman (FOCS ’12). Our work generalizes a recent result of Reingold, Steinke, and Vadhan (RANDOM ’13) for permutation branching programs. The main technical novelty underlying our generator is a new bound on the Fourier growth of width-3, oblivious, read-once branching programs. Specifically, we show that for any \(f : \{0, 1\}^n → \{0, 1\}\) computed by such a branching program, and \(k ∈ [n]\),

 \(\displaystyle\sum_{s \subseteq [n]: |s|=k} \big| \hat{f}[s] \big| \le n^2 \cdot (O(\log n))^k\),

where \(\hat{f}[s] = \mathbb{E}_U [f(U) \cdot (-1)^{s \cdot U}]\) is the standard Fourier transform over \(\mathbb{Z}^n_2\). The base \(O(\log n)\) of the Fourier growth is tight up to a factor of \(\log \log n\).
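As a concrete illustration of the quantity being bounded, here is a brute-force computation of the level-\(k\) Fourier mass on a toy function (not taken from the paper; the function choice and parameters are illustrative):

from itertools import product

def fourier_coefficient(f, s, n):
    # \hat{f}[s] = E over uniform U in {0,1}^n of f(U) * (-1)^{<s, U>}
    total = 0.0
    for u in product([0, 1], repeat=n):
        sign = (-1) ** sum(si * ui for si, ui in zip(s, u))
        total += f(u) * sign
    return total / 2 ** n

def level_k_mass(f, n, k):
    # sum of |\hat{f}[s]| over all s of Hamming weight k
    return sum(abs(fourier_coefficient(f, s, n))
               for s in product([0, 1], repeat=n) if sum(s) == k)

# Toy example: f = AND of 3 bits, computable by a width-3, read-once program.
n = 3
f = lambda u: int(all(u))
for k in range(n + 1):
    print(k, level_k_mass(f, n, k))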

TOC 2017.pdf APPROX-RANDOM 2014.pdf ArXiv 2014.pdf
2014
Gopalan, Parikshit, Salil Vadhan, and Yuan Zhou. “Locally testable codes and Cayley graphs.” In Moni Naor, editor, Innovations in Theoretical Computer Science (ITCS ‘14), 81-92. ACM, 2014.

Version History: Full version posted as https://arxiv.org/pdf/1308.5158.pdf.

We give two new characterizations of (\(\mathbb{F}_2\)-linear) locally testable error-correcting codes in terms of Cayley graphs over \(\mathbb{F}^h_2\):

  1. A locally testable code is equivalent to a Cayley graph over \(\mathbb{F}^h_2\) whose set of generators is significantly larger than \(h\) and has no short linear dependencies, but yields a shortest-path metric that embeds into \(\ell_1\) with constant distortion. This extends and gives a converse to a result of Khot and Naor (2006), which showed that codes with large dual distance imply Cayley graphs that have no low-distortion embeddings into \(\ell_1\).

  2. A locally testable code is equivalent to a Cayley graph over \(\mathbb{F}^h_2\) that has significantly more than \(h\) eigenvalues near 1, which have no short linear dependencies among them and which “explain” all of the large eigenvalues. This extends and gives a converse to a recent construction of Barak et al. (2012), which showed that locally testable codes imply Cayley graphs that are small-set expanders but have many large eigenvalues.

ITCS2014.pdf ArXiv2018.pdf
2013
Mahmoody, Mohammad, Tal Moran, and Salil Vadhan. “Publicly verifiable proofs of sequential work.” In Innovations in Theoretical Computer Science (ITCS ‘13), 373-388. ACM, 2013.

Version History: Preliminary version posted as Cryptology ePrint Archive Report 2011/553, under the title “Non-Interactive Time-Stamping and Proofs of Work in the Random Oracle Model”.

We construct a publicly verifiable protocol for proving computational work based on collision-resistant hash functions and a new plausible complexity assumption regarding the existence of “inherently sequential” hash functions. Our protocol is based on a novel construction of time-lock puzzles. Given a sampled “puzzle” \(\mathcal{P} \overset{\$}{\gets} \mathbf{D}_n\), where \(n\) is the security parameter and \(\mathbf{D}_n\) is the distribution of the puzzles, a corresponding “solution” can be generated using \(N\) evaluations of the sequential hash function, where \(N > n\) is another parameter, while any feasible adversarial strategy for generating valid solutions must take at least as much time as \(\Omega(N)\) sequential evaluations of the hash function after receiving \(\mathcal{P}\). Thus, valid solutions constitute a “proof” that \(\Omega(N)\) parallel time elapsed since \(\mathcal{P}\) was received. Solutions can be publicly and efficiently verified in time \(\mathrm{poly}(n) \cdot \mathrm{polylog}(N)\). Applications of these “time-lock puzzles” include non-interactive timestamping of documents (when the distribution over the possible documents corresponds to the puzzle distribution \(\mathbf{D}_n\)) and universally verifiable CPU benchmarks.
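A minimal sketch of the sequentiality idea only (assumptions: SHA-256 stands in for an “inherently sequential” hash, and the toy verifier below simply recomputes the chain; the paper's actual construction achieves \(\mathrm{poly}(n) \cdot \mathrm{polylog}(N)\) verification via depth-robust graphs and hash commitments, which this sketch does not attempt):

import hashlib

def solve(puzzle: bytes, N: int) -> bytes:
    # Produce a "solution" via N sequential hash evaluations; each step depends
    # on the previous output, so the work cannot be parallelized if the hash
    # is inherently sequential.
    state = puzzle
    for _ in range(N):
        state = hashlib.sha256(state).digest()
    return state

def naive_verify(puzzle: bytes, solution: bytes, N: int) -> bool:
    # Toy check that recomputes the whole chain (linear time, unlike the
    # polylog-time verification in the actual protocol).
    return solve(puzzle, N) == solution

P = b"sampled puzzle"          # stands in for a puzzle drawn from D_n
sol = solve(P, N=100_000)
print(naive_verify(P, sol, N=100_000))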

Our construction is secure in the standard model under complexity assumptions (collision-resistant hash functions and inherently sequential hash functions), and makes black-box use of the underlying primitives. Consequently, the corresponding construction in the random oracle model is secure unconditionally. Moreover, as it is a public-coin protocol, it can be made non-interactive in the random oracle model using the Fiat-Shamir Heuristic.

Our construction makes a novel use of “depth-robust” directed acyclic graphs—ones whose depth remains large even after removing a constant fraction of vertices—which were previously studied for the purpose of complexity lower bounds. The construction bypasses a recent negative result of Mahmoody, Moran, and Vadhan (CRYPTO ‘11) for time-lock puzzles in the random oracle model, which showed that it is impossible to have time-lock puzzles like ours in the random oracle model if the puzzle generator also computes a solution together with the puzzle.

IACR2013.pdf ITCS2013.pdf
Vadhan, Salil, and Colin Jia Zheng. “A uniform min-max theorem with applications in cryptography.” In Ran Canetti and Juan Garay, editors, Advances in Cryptology—CRYPTO ‘13, Lecture Notes in Computer Science, 8042:93-110. Springer-Verlag, 2013.
Version History: Full version published on ECCC and the IACR Cryptology ePrint Archive, 2013.

We present a new, more constructive proof of von Neumann’s Min-Max Theorem for two-player zero-sum games — specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best-responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior ’99) with the advantage that the algorithm runs in \(\mathrm{poly}(n)\) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over \(\{0,1\}^n\). This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
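For intuition, a sketch of the Freund–Schapire style dynamics that the theorem's algorithm builds on (the payoff matrix, number of rounds, and learning rate below are illustrative; this is not the paper's algorithm, which handles first-player strategies that are distributions over \(\{0,1\}^n\)):

import math

def mw_minmax(M, T=2000, eta=0.1):
    # M[i][j]: payoff to player 2 when player 1 plays i and player 2 plays j.
    # Player 1 runs multiplicative weights (trying to keep the payoff low);
    # player 2 best-responds each round. Returns the empirical distribution of
    # player 2's best responses, a near-optimal mixed strategy for player 2.
    m, n = len(M), len(M[0])
    w = [1.0] * m                     # player 1's weights over pure strategies
    counts = [0] * n                  # how often each player-2 action was a best response
    for _ in range(T):
        total = sum(w)
        p = [wi / total for wi in w]  # player 1's current mixed strategy
        j = max(range(n), key=lambda j: sum(p[i] * M[i][j] for i in range(m)))
        counts[j] += 1
        # downweight player-1 strategies that paid a lot against j
        w = [w[i] * math.exp(-eta * M[i][j]) for i in range(m)]
    return [c / T for c in counts]

# Matching pennies: the optimal mixed strategy is (0.5, 0.5).
print(mw_minmax([[1, 0], [0, 1]]))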

We describe several applications, including a more modular and improved uniform version of Impagliazzo’s Hardcore Theorem (FOCS ’95), showing impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC ’11) for the nonuniform setting), and efficiently simulating high entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC ’09)).

CRYPTO2013.pdf ECCC2013.pdf
Reingold, Omer, Thomas Steinke, and Salil Vadhan. “Pseudorandomness for regular branching programs via Fourier analysis.” In Sofya Raskhodnikova and José Rolim, editors, Proceedings of the 17th International Workshop on Randomization and Computation (RANDOM ‘13), Lecture Notes in Computer Science, 8096:655-670. Springer-Verlag, 2013.

Version History: Full version posted as ECCC TR13-086 and arXiv:1306.3004 [cs.CC].

We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is \(O(\log^2 n)\), where \(n\) is the length of the branching program. The previous best seed length known for this model was \(n^{1/2+o(1)}\), which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of \(s^{1/2+o(1)}\) for arbitrary branching programs of size \(s\)). Our techniques also give seed length \(n^{1/2+o(1)}\) for general oblivious, read-once branching programs of width \(2^{n^{o(1)}}\), which is incomparable to the results of Impagliazzo et al.

Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width \(w\) has Fourier mass at most \((2w^2)^k\) at level \(k\), independent of the length of the program.

RANDOM2013.pdf ArXiv2013.pdf
2012
Dodis, Yevgeniy, Thomas Ristenpart, and Salil Vadhan. “Randomness condensers for efficiently samplable, seed-dependent sources.” In Ronald Cramer, editor, Proceedings of the 9th IACR Theory of Cryptography Conference (TCC ‘12), Lecture Notes in Computer Science, 7194:618-635. Springer-Verlag, 2012.

We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions \(\mathsf{Cond} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m\) such that if we choose a random seed \(S \gets \{0,1\}^d\), and a source \(X = \mathcal{A}(S)\) is generated by a randomized circuit \(\mathcal{A}\) of size \(t\) such that \(X\) has min-entropy at least \(k\) given \(S\), then \(\mathsf{Cond}(X ; S)\) should have min-entropy at least some \(k'\) given \(S\). The distinction from the standard notion of randomness condensers is that the source \(X\) may be correlated with the seed \(S\) (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where \(k' = m\)) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS ‘00).
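A toy instantiation of this syntax (illustrative only: truncated SHA-256 plays the role of \(\mathsf{Cond}\), and the sampler below is one particular seed-dependent source with min-entropy exactly \(k\) given \(S\), not a worst-case adversary):

import hashlib
from collections import Counter
from math import log2

n, d, m, k = 8, 8, 4, 4            # toy lengths: source, seed, output; source min-entropy

def cond(x: int, s: int) -> int:
    # Cond(X; S): the first m bits of SHA-256(S || X)
    return hashlib.sha256(bytes([s, x])).digest()[0] >> (8 - m)

def sampler(s: int):
    # A(S): uniform over 2^k source values that depend on the seed S
    return [(s ^ r) & (2 ** n - 1) for r in range(2 ** k)]

def output_min_entropy(s: int) -> float:
    # min-entropy of Cond(X; S) for this particular seed S
    counts = Counter(cond(x, s) for x in sampler(s))
    return -log2(max(counts.values()) / 2 ** k)

entropies = [output_min_entropy(s) for s in range(2 ** d)]
print(min(entropies), sum(entropies) / len(entropies))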

We show that:

  • Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size \(t\) of the adversarial sampling algorithm \(\mathcal{A}\). Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy \(k' = m - O(\log t)\), i.e., logarithmic entropy deficiency.

  • Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible “squared advantage” [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.

  • Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.

IACR2012.pdf
Vadhan, Salil, and Colin Jia Zheng. “Characterizing pseudoentropy and simplifying pseudorandom generator constructions.” In Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC ‘12), 817-836. ACM, 2012.

Version History: Full version posted as ECCC TR11-141.

We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (\(X, B\)) be jointly distributed random variables such that \(B\) takes values in a polynomial-sized set. We show that \(B\) is computationally indistinguishable from a random variable of higher Shannon entropy given \(X\) if and only if there is no probabilistic polynomial-time \(S\) such that \((X, S(X))\) has small KL divergence from \((X, B)\). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS ‘95) for Shannon entropy (rather than min-entropy).
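A toy computation of the KL-divergence quantity in this characterization (the joint distributions below are made up for illustration; a real sampler \(S\) would be a probabilistic polynomial-time algorithm):

from math import log2

def kl(p: dict, q: dict) -> float:
    # KL(p || q) = sum_z p(z) * log2(p(z) / q(z)), in bits
    return sum(pz * log2(pz / q[z]) for z, pz in p.items() if pz > 0)

# (X, B): X is a uniform bit, B is correlated with X.
p_XB = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35}
# (X, S(X)): a candidate sampler S that just outputs a fresh uniform bit.
q_XS = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(kl(p_XB, q_XS))   # strictly positive: this particular S does not mimic B well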

Using this characterization, we show that if \(f\) is a one-way function, then \((f(U_n), U_n)\) has “next-bit pseudoentropy” at least \(n + \log n\), establishing a conjecture of Haitner, Reingold, and Vadhan (STOC ‘10). Plugging this into the construction of Haitner et al. yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support “local list-decoding” (as in the Goldreich–Levin hardcore predicate, STOC ‘89).
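For concreteness, one standard example of the kind of hash family the construction can use (a 2-universal family of GF(2)-linear maps given by random binary matrices; such families are randomness extractors by the leftover hash lemma; this is an illustration, not code from the paper):

import random

def sample_hash(n_bits: int, m_bits: int):
    # h(x) = A x over GF(2), with A a uniformly random m x n binary matrix;
    # for x != y, Pr[h(x) = h(y)] = 2^{-m}, i.e. the family is 2-universal.
    A = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(m_bits)]
    def h(x: int) -> int:
        xs = [(x >> i) & 1 for i in range(n_bits)]
        return sum((sum(a * b for a, b in zip(row, xs)) & 1) << j
                   for j, row in enumerate(A))
    return h

h = sample_hash(n_bits=16, m_bits=4)
print(h(0b1011001110001101))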

With an additional idea, we also show how to improve the seed length of the pseudorandom generator to \(\tilde{O}(n^3)\), compared to \(\tilde{O}(n^4)\) in the construction of Haitner et al.

STOC2012.pdf ECCC2013.pdf
Dodis, Yevgeniy, Adriana López-Alt, Ilya Mironov, and Salil Vadhan. “Differential privacy with imperfect randomness.” In Ran Canetti and Rei Safavi-Naini, editors, Proceedings of the 32nd International Cryptology Conference (CRYPTO ‘12), Lecture Notes in Computer Science, 7417:497-516. Springer-Verlag, 2012.

In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC’07) showed that if a source of randomness \(\mathcal{R}\) is “good enough” to generate a secret key capable of encrypting \(k\) bits, then one can deterministically extract nearly \(k\) almost uniform bits from \(\mathcal{R}\), suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an “extractable” source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific “non-extractable” sources of randomness, such as the \(\gamma\)-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias \(\gamma < 1\) (possibly depending on prior bits).
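A toy sampler for the \(\gamma\)-Santha-Vazirani condition just described (the “adversary” choosing the bias of each next bit is an arbitrary stand-in for illustration):

import random

def sv_source(num_bits: int, gamma: float, adversary):
    # Each next bit equals 1 with probability in [(1 - gamma)/2, (1 + gamma)/2],
    # where the bias may depend adversarially on the bits generated so far.
    bits = []
    for _ in range(num_bits):
        bias = max(-gamma, min(gamma, adversary(bits)))
        p_one = (1 + bias) / 2
        bits.append(1 if random.random() < p_one else 0)
    return bits

# Example adversary: push the next bit toward repeating the previous one.
print(sv_source(16, gamma=0.2,
                adversary=lambda prefix: 0.2 if prefix and prefix[-1] else -0.2))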

We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC’06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary “low sensitivity” functions that works even with randomness coming from a \(\gamma\)-Santha-Vazirani source, for any \(\gamma < 1\). This provides a somewhat surprising “separation” between traditional privacy and differential privacy with respect to imperfect randomness.

Interestingly, the design of our mechanism is quite different from the traditional “additive-noise” mechanisms (e.g., Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (non-trivial) “SV-robust” mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.

CRYPTO2012.pdf IACR2012.pdf
Gopalan, Parikshit, Raghu Meka, Omer Reingold, Luca Trevisan, and Salil Vadhan. “Better pseudorandom generators via milder pseudorandom restrictions.” In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘12), 120-129. IEEE, 2012.

Version History: Full version posted as ECCC TR12-123 and as arXiv:1210.0049 [cs.CC].

We present an iterative approach to constructing pseudorandom generators, based on the repeated application of mild pseudorandom restrictions. We use this template to construct pseudorandom generators for combinatorial rectangles and read-once \(\mathsf{CNF}\)s and a hitting set generator for width-3 branching programs, all of which achieve near-optimal seed length even in the low-error regime: We get seed length \(\tilde{O}(\log(n/\epsilon))\) for error \(\epsilon\). Previously, only constructions with seed length \(O(\log^{3/2} n)\) or \(O(\log^2 n)\) were known for these classes with error \(\epsilon = 1/\mathrm{poly}(n)\). The (pseudo)random restrictions we use are milder than those typically used for proving circuit lower bounds in that we only set a constant fraction of the bits at a time. While such restrictions do not simplify the functions drastically, we show that they can be derandomized using small-bias spaces.

ArXiv2012.pdf IEEE2012.pdf ECCC2012.pdf