Publications

2007
Canetti, Ran, Ron Rivest, Madhu Sudan, Luca Trevisan, Salil Vadhan, and Hoeteck Wee. “Amplifying collision-resistance: A complexity-theoretic treatment.” In A. Menezes, editor, Advances in Cryptology (CRYPTO '07), 4622:264-283. Lecture Notes in Computer Science, Springer-Verlag, 2007.

We initiate a complexity-theoretic treatment of hardness amplification for collision-resistant hash functions, namely the transformation of weakly collision-resistant hash functions into strongly collision-resistant ones in the standard model of computation. We measure the level of collision resistance by the maximum probability, over the choice of the key, for which an efficient adversary can find a collision. The goal is to obtain constructions with short output, short keys, small loss in adversarial complexity tolerated, and a good trade-off between compression ratio and computational complexity. We provide an analysis of several simple constructions, and show that many of the parameters achieved by our constructions are almost optimal in some sense.

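For intuition, a minimal sketch (with a hypothetical weak_hash placeholder; this concatenation combiner is merely in the spirit of the simple constructions analyzed, not necessarily the paper's exact scheme): keying two independent copies of a weak hash and concatenating outputs forces any collision to hold under both keys at once, roughly squaring the adversary's success probability while doubling key and output length.

    import hashlib

    def weak_hash(key: bytes, x: bytes) -> bytes:
        # Stand-in for a keyed, weakly collision-resistant hash h_k(x)
        # (hypothetical placeholder; truncated output for illustration).
        return hashlib.sha256(key + x).digest()[:8]

    def amplified_hash(k1: bytes, k2: bytes, x: bytes) -> bytes:
        # h'_{k1,k2}(x) = h_{k1}(x) || h_{k2}(x). A collision x != y for h'
        # collides under BOTH independently chosen keys, so the adversary's
        # success probability roughly squares, at the cost of doubled key
        # and output length: exactly the kind of trade-off studied above.
        return weak_hash(k1, x) + weak_hash(k2, x)
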
CRYPTO2007.pdf
2006
Ben-Sasson, Eli, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Robust PCPs of proximity, shorter PCPs, and applications to coding.” SIAM Journal on Computing: Special Issue on Randomness and Complexity 36, no. 4 (2006): 889-974.

Version History: Extended abstract in STOC '04.

We continue the study of the trade-off between the length of probabilistically checkable proofs (PCPs) and their query complexity, establishing the following main results (which refer to proofs of satisfiability of circuits of size \(n\)):

  1. We present PCPs of length \(\exp(o(\log \log n)^2) \cdot n\) that can be verified by making \(o(\log \log n)\) Boolean queries.
  2. For every \(\varepsilon > 0\), we present PCPs of length \(\exp(\log^{\varepsilon} n) \cdot n\) that can be verified by making a constant number of Boolean queries.

In both cases, false assertions are rejected with constant probability (which may be set to be arbitrarily close to 1). The multiplicative overhead on the length of the proof, introduced by transforming a proof into a probabilistically checkable one, is just quasi-polylogarithmic in the first case (of query complexity \(o(\log \log n)\)), and is \(2^{(\log n)^{\varepsilon}}\), for any \(\varepsilon > 0\), in the second case (of constant query complexity). Our techniques include the introduction of a new variant of PCPs that we call “robust PCPs of proximity.” These new PCPs facilitate proof composition, which is a central ingredient in the construction of PCP systems. (A related notion and its composition properties were discovered independently by Dinur and Reingold.) Our main technical contribution is a construction of a “length-efficient” robust PCP of proximity. While the new construction uses many of the standard techniques used in PCP constructions, it does differ from previous constructions in fundamental ways, and in particular does not use the “parallelization” step of Arora et al. [J. ACM, 45 (1998), pp. 501–555]. The alternative approach may be of independent interest. We also obtain analogous quantitative results for locally testable codes. In addition, we introduce a relaxed notion of locally decodable codes and present such codes mapping \(k\) information bits to codewords of length \(k^{1+\varepsilon}\) for any \(\varepsilon > 0\).

SICOMP2006.pdf
Healy, Alex, Salil Vadhan, and Emanuele Viola. “Using nondeterminism to amplify hardness.” SIAM Journal on Computing: Special Issue on STOC '04 35, no. 4 (2006): 903-931.

We revisit the problem of hardness amplification in \(\mathcal{NP}\) as recently studied by O’Donnell [J. Comput. System Sci., 69 (2004), pp. 68–94]. We prove that if \(\mathcal{NP}\) has a balanced function \(f\) such that any circuit of size \(s(n)\) fails to compute \(f\) on a \(1/\mathrm{poly}(n)\) fraction of inputs, then \(\mathcal{NP}\) has a function \(f'\) such that any circuit of size \(s'(n) = s(\sqrt{n})^{\Omega(1)}\) fails to compute \(f'\) on a \(1/2 - 1/s'(n)\) fraction of inputs. In particular,

  1. if \(s(n) = n^{\omega(1)}\), we amplify to hardness \(1/2 - 1/n^{\omega(1)}\);
  2. if \(s(n) = 2^{n^{\Omega(1)}}\), we amplify to hardness \(1/2 - 1/2^{n^{\Omega(1)}}\);
  3. if \(s(n) = 2^{\Omega(n)}\), we amplify to hardness \(1/2 - 1/2^{\Omega(\sqrt{n})}\).

Our results improve those of O’Donnell, which amplify to \(1/2 - 1/\sqrt{n}\). O’Donnell also proved that no construction of a certain general form could amplify beyond \(1/2 - 1/n\). We bypass this barrier by using both derandomization and nondeterminism in the construction of \(f'\). We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that \(f\) is balanced are necessary for “black-box” hardness amplification procedures (such as ours).

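For background, a sketch of the classical XOR route to hardness amplification (Yao's XOR lemma). The construction below leaves \(\mathcal{NP}\); the paper's contribution is precisely to achieve comparable amplification inside \(\mathcal{NP}\), via monotone combining functions, derandomization, and nondeterminism.

    from functools import reduce

    def xor_amplify(f, k):
        # f'(x_1, ..., x_k) = f(x_1) XOR ... XOR f(x_k). Yao's XOR lemma:
        # if every small circuit errs on a noticeable fraction of inputs
        # to f, then f' is hard to compute much better than guessing.
        def f_prime(xs):
            assert len(xs) == k
            return reduce(lambda a, b: a ^ b, (f(x) for x in xs))
        return f_prime

    # Toy usage with a placeholder predicate on bit strings:
    f = lambda x: x.count("1") % 2
    f3 = xor_amplify(f, 3)
    print(f3(["101", "110", "011"]))  # 0
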
SICOMP2006.pdf
Vadhan, Salil. “An unconditional study of computational zero knowledge.” SIAM Journal on Computing: Special Issue on Randomness and Complexity 36, no. 4 (2006): 1160-1214.

Version History: Extended abstract in FOCS '04.

We prove a number of general theorems about \(\mathbf{ZK}\), the class of problems possessing (computational) zero-knowledge proofs. Our results are unconditional, in contrast to most previous works on \(\mathbf{ZK}\), which rely on the assumption that one-way functions exist. We establish several new characterizations of \(\mathbf{ZK}\) and use these characterizations to prove results such as the following:

  1. Honest-verifier \(\mathbf{ZK}\) equals general \(\mathbf{ZK}\).
  2. Public-coin \(\mathbf{ZK}\) equals private-coin \(\mathbf{ZK}\).
  3. \(\mathbf{ZK}\) is closed under union.
  4. \(\mathbf{ZK}\) with imperfect completeness equals \(\mathbf{ZK}\) with perfect completeness.
  5. Any problem in \(\mathbf{ZK} \cap \mathbf{NP}\) can be proven in computational zero knowledge by a \(\mathbf{BPP^{NP}}\) prover.
  6. \(\mathbf{ZK}\) with black-box simulators equals \(\mathbf{ZK}\) with general, non–black-box simulators.

The above equalities refer to the resulting class of problems (and do not necessarily preserve other efficiency measures such as round complexity). Our approach is to combine the conditional techniques previously used in the study of \(\mathbf{ZK}\) with the unconditional techniques developed in the study of \(\mathbf{SZK}\), the class of problems possessing statistical zero-knowledge proofs. To enable this combination, we prove that every problem in \(\mathbf{ZK}\) can be decomposed into a problem in \(\mathbf{SZK}\) together with a set of instances from which a one-way function can be constructed.

SICOMP2006.pdf
Reingold, Omer, Luca Trevisan, and Salil Vadhan. “Pseudorandom walks in regular digraphs and the RL vs. L problem.” In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC '06), 457-466. ACM, 2006.

Version History: Preliminary version as ECCC TR05-22, February 2005 (https://eccc.weizmann.ac.il/report/2005/022/; attached as ECCC2005.pdf).

We revisit the general \(\mathbf{RL}\) vs. \(\mathbf{L}\) question, obtaining the following results.

  1. Generalizing Reingold’s techniques to directed graphs, we present a deterministic, log-space algorithm that given a regular directed graph \(G\) (or, more generally, a digraph with Eulerian connected components) and two vertices \(s\) and \(t\), finds a path between \(s\) and \(t\) if one exists.
  2. If we restrict ourselves to directed graphs that are regular and consistently labelled, then we are able to produce pseudorandom walks for such graphs in logarithmic space (this result already found an independent application).
  3. We prove that if (2) could be generalized to all regular directed graphs (including ones that are not consistently labelled) then \(\mathbf{L=RL}\). We do so by exhibiting a new complete promise problem for \(\mathbf{RL}\), and showing that such a problem can be solved in deterministic logarithmic space given a log-space pseudorandom walk generator for regular directed graphs.
ACM2006.pdf ECCC2005.pdf
Micciancio, Daniele, Shien Jin Ong, Amit Sahai, and Salil Vadhan. “Concurrent zero knowledge without complexity assumptions.” In S. Halevi and T. Rabin, eds., Proceedings of the Third Theory of Cryptography Conference (TCC '06), 3876:1-20. New York, NY, USA: Springer Verlag, Lecture Notes in Computer Science, 2006.

Version History: Full version available at https://eccc.weizmann.ac.il//eccc-reports/2005/TR05-093/ (attached as ECCC2005.pdf).

We provide unconditional constructions of concurrent statistical zero-knowledge proofs for a variety of non-trivial problems (not known to have probabilistic polynomial-time algorithms). The problems include Graph Isomorphism, Graph Nonisomorphism, Quadratic Residuosity, Quadratic Nonresiduosity, a restricted version of Statistical Difference, and approximate versions of the (\(\mathsf{coNP}\) forms of the) Shortest Vector Problem and Closest Vector Problem in lattices. For some of the problems, such as Graph Isomorphism and Quadratic Residuosity, the proof systems have provers that can be implemented in polynomial time (given an \(\mathsf{NP}\) witness) and have \(\tilde{O}(\log n)\) rounds, which is known to be essentially optimal for black-box simulation. To the best of our knowledge, these are the first constructions of concurrent zero-knowledge proofs in the plain, asynchronous model (i.e., without setup or timing assumptions) that do not require complexity assumptions (such as the existence of one-way functions).

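For concreteness, one round of the classic perfect zero-knowledge proof for Graph Isomorphism due to Goldreich, Micali, and Wigderson (a stand-alone protocol; the paper's contribution is making such proofs secure under concurrent composition). The prover, knowing an isomorphism pi with G1 = pi(G0), sends a random copy of G0 and answers a one-bit challenge.

    import random

    def permute(graph, perm):
        # graph: a set of frozenset edges on vertices 0..n-1.
        return {frozenset((perm[u], perm[v])) for u, v in map(tuple, graph)}

    def gi_round(G0, G1, pi, n):
        sigma = random.sample(range(n), n)   # random relabeling
        H = permute(G0, sigma)               # prover's message
        b = random.randint(0, 1)             # verifier's challenge
        if b == 0:
            reveal = sigma                   # show H = sigma(G0)
        else:                                # show H = (sigma o pi^-1)(G1)
            inv_pi = [0] * n
            for v in range(n):
                inv_pi[pi[v]] = v
            reveal = [sigma[inv_pi[v]] for v in range(n)]
        return permute((G0, G1)[b], reveal) == H   # verifier's check

    # Toy usage: G1 is G0 relabeled by pi.
    n = 4
    G0 = {frozenset((0, 1)), frozenset((1, 2)), frozenset((2, 3))}
    pi = [2, 0, 3, 1]
    G1 = permute(G0, pi)
    print(all(gi_round(G0, G1, pi, n) for _ in range(20)))  # True
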
ECCC2005.pdf TCC2006.pdf
Nguyen, Minh-Huyen, and Salil Vadhan. “Zero knowledge with efficient provers.” In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC '06), 287-295. ACM, 2006.
We prove that every problem in NP that has a zero-knowledge proof also has a zero-knowledge proof where the prover can be implemented in probabilistic polynomial time given an NP witness. Moreover, if the original proof system is statistical zero knowledge, so is the resulting efficient-prover proof system. An equivalence of zero knowledge and efficient-prover zero knowledge was previously known only under the assumption that one-way functions exist (whereas our result is unconditional), and no such equivalence was known for statistical zero knowledge. Our results allow us to translate the many general results and characterizations known for zero knowledge with inefficient provers to zero knowledge with efficient provers.
STOC2006.pdf
Gradwohl, Ronen, Salil Vadhan, and David Zuckerman. “Random selection with an adversarial majority.” In Advances in Cryptology (CRYPTO '06), C. Dwork, ed., 4117:409-426. Springer Verlag, Lecture Notes in Computer Science, 2006.

Version History: Full version published in ECCC TR 06-026, February 2006. Updated full version published June 2006.

We consider the problem of random selection, where \(p\) players follow a protocol to jointly select a random element of a universe of size \(n\). However, some of the players may be adversarial and collude to force the output to lie in a small subset of the universe. We describe essentially the first protocols that solve this problem in the presence of a dishonest majority in the full-information model (where the adversary is computationally unbounded and all communication is via non-simultaneous broadcast). Our protocols are nearly optimal in several parameters, including the round complexity (as a function of \(n\)), the randomness complexity, the communication complexity, and the tradeoffs between the fraction of honest players, the probability that the output lies in a small subset of the universe, and the density of this subset.

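To see why a dishonest majority is challenging here, consider the naive one-round protocol: each player broadcasts a random string and the output is their XOR. With non-simultaneous broadcast, the last player to speak fixes the output completely, as the sketch below illustrates (this is the failure mode the paper's multi-round protocols overcome, not the paper's protocol itself).

    import secrets

    def eval_xor(shares):
        out = 0
        for s in shares:
            out ^= s
        return out

    # Four honest players broadcast random 8-bit strings, one at a time.
    honest = [secrets.randbits(8) for _ in range(4)]

    # A rushing last player sees all previous shares and forces any target.
    target = 42
    evil = target ^ eval_xor(honest)
    print(eval_xor(honest + [evil]))  # always prints 42
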
CRYPTO2006.pdf ECCC-02.2006.pdf FULL-06.2006.pdf
Nguyen, Minh-Huyen, Shien Jin Ong, and Salil Vadhan. “Statistical zero-knowledge arguments for NP from any one-way function.” In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS '06), 3-13. IEEE, 2006.

Version History: Merged with the STOC '07 paper of Haitner and Reingold. Also available as a journal version; the full version was invited to the SIAM J. Computing Special Issue on FOCS '06.

We show that every language in NP has a statistical zero-knowledge argument system under the (minimal) complexity assumption that one-way functions exist. In such protocols, even a computationally unbounded verifier cannot learn anything other than the fact that the assertion being proven is true, whereas a polynomial-time prover cannot convince the verifier to accept a false assertion except with negligible probability. This resolves an open question posed by Naor et al. (1998). Departing from previous works on this problem, we do not construct standard statistically hiding commitments from any one-way function. Instead, we construct a relaxed variant of commitment schemes called "1-out-of-2-binding commitments," recently introduced by Nguyen et al. (2006).

FOCS2006.pdf
2005
Trevisan, Luca, Salil Vadhan, and David Zuckerman. “Compression of samplable sources.” Computational Complexity: Special Issue on CCC '04 14, no. 3 (2005): 186-227.

We study the compression of polynomially samplable sources. In particular, we give efficient prefix-free compression and decompression algorithms for three classes of such sources (whose support is a subset of \(\{0, 1\}^n\)).

  1. We show how to compress sources \(X\) samplable by logspace machines to expected length \(H(X) + O(1)\).

Our next results concern flat sources whose support is in \(\mathbf{P}\):

  2. If \(H(X) \le k = n - O(\log n)\), we show how to compress to expected length \(k + \mathrm{polylog}(n - k)\).
  3. If the support of \(X\) is the witness set for a self-reducible \(\mathbf{NP}\) relation, then we show how to compress to expected length \(H(X) + 5\).
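
For calibration, the information-theoretic benchmark behind these bounds: when the distribution is explicitly known, a prefix-free code (e.g., Huffman) already achieves expected length below \(H(X) + 1\); the difficulty addressed by the paper is approaching such bounds when \(X\) is presented only by an efficient sampler. A small illustration:

    import heapq, math

    def huffman_lengths(probs):
        # probs: dict symbol -> probability; returns codeword lengths of
        # an optimal prefix-free code (expected length < H(X) + 1).
        heap = [(p, i, (sym,)) for i, (sym, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        depth = {sym: 0 for sym in probs}
        while len(heap) > 1:
            p1, _, g1 = heapq.heappop(heap)
            p2, j, g2 = heapq.heappop(heap)
            for sym in g1 + g2:
                depth[sym] += 1
            heapq.heappush(heap, (p1 + p2, j, g1 + g2))
        return depth

    probs = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
    L = huffman_lengths(probs)
    expected = sum(p * L[s] for s, p in probs.items())
    H = -sum(p * math.log2(p) for p in probs.values())
    print(expected, H)  # 1.75 1.75: expected length meets H(X) here
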
CC2005.pdf
Ben-Sasson, Eli, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Short PCPs verifiable in polylogarithmic time.” In Proceedings of the 20th Annual IEEE Conference on Computational Complexity (CCC '05), 120-134. IEEE, 2005.
We show that every language in NP has a probabilistically checkable proof of proximity (i.e., proofs asserting that an instance is "close" to a member of the language), where the verifier's running time is polylogarithmic in the input size and the length of the probabilistically checkable proof is only polylogarithmically larger than the length of the classical proof. (Such a verifier can only query polylogarithmically many bits of the input instance and the proof. Thus it needs oracle access to the input as well as the proof, and cannot guarantee that the input is in the language - only that it is close to some string in the language.) If the verifier is restricted further in its query complexity and only allowed \(q\) queries, then the proof size blows up by a factor of \(2^{(\log n)^{c/q}}\), where the constant \(c\) depends only on the language (and is independent of \(q\)). Our results thus give efficient (in the sense of running time) versions of the shortest known PCPs, due to Ben-Sasson et al. (STOC '04) and Ben-Sasson and Sudan (STOC '05), respectively. The time complexity of the verifier and the size of the proof were the original emphases in the definition of holographic proofs, due to Babai et al. (STOC '91), and our work is the first to return to these emphases since their work. Of technical interest in our proof is a new complete problem for NEXP based on constraint satisfaction problems with very low complexity constraints, and techniques to arithmetize such constraints over fields of small characteristic.
Rozenman, Eyal, and Salil Vadhan. “Derandomized squaring of graphs.” In Proceedings of the 9th International Workshop on Randomization and Computation (RANDOM '05), 3624:436-447. Berkeley, CA: Springer Verlag, Lecture Notes in Computer Science, 2005.

We introduce a “derandomized” analogue of graph squaring. This operation increases the connectivity of the graph (as measured by the second eigenvalue) almost as well as squaring the graph does, yet only increases the degree of the graph by a constant factor, instead of squaring the degree.

One application of this product is an alternative proof of Reingold’s recent breakthrough result that S-T Connectivity in Undirected Graphs can be solved in deterministic logspace.

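A sketch of the operation on rotation maps, on a toy instance for illustration: \(G\) is the 4-cycle (2-regular) and the auxiliary expander \(H\) on the label set \(\{0, 1\}\) is the 1-regular matching, so the derandomized square is 2-regular instead of the 4-regular true square.

    def rot_G(v, i):
        # Rotation map of the 4-cycle: label 0 = forward, 1 = backward;
        # returns (neighbor, label of the reverse edge).
        return ((v + 1) % 4, 1) if i == 0 else ((v - 1) % 4, 0)

    def rot_H(i, a):
        # Rotation map of H, the single perfect matching on {0, 1}.
        return (1 - i, 0)

    def derand_square(v, label):
        i, a = label
        w, i_back = rot_G(v, i)        # first step in G
        j, a_back = rot_H(i_back, a)   # H constrains the second label,
        u, j_back = rot_G(w, j)        # so degree is deg(G)*deg(H), not deg(G)^2
        return u, (j_back, a_back)

    print(derand_square(0, (0, 0)))    # (2, (1, 0)): a 2-step walk 0 -> 1 -> 2

With this choice of H, the allowed two-step walks on the cycle are exactly the non-backtracking ones, which is how connectivity improves without squaring the degree.
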
RANDOM2005.pdf
2003
Sahai, Amit, and Salil Vadhan. “A complete problem for statistical zero knowledge.” Journal of the ACM 50, no. 2 (2003): 196-249.
We present the first complete problem for SZK, the class of (promise) problems possessing statistical zero-knowledge proofs (against an honest verifier). The problem, called STATISTICAL DIFFERENCE, is to decide whether two efficiently samplable distributions are either statistically close or far apart. This gives a new characterization of SZK that makes no reference to interaction or zero knowledge.

We propose the use of complete problems to unify and extend the study of statistical zero knowledge. To this end, we examine several consequences of our Completeness Theorem and its proof, such as:

  • A way to make every (honest-verifier) statistical zero-knowledge proof very communication efficient, with the prover sending only one bit to the verifier (to achieve soundness error 1/2).
  • Simpler proofs of many of the previously known results about statistical zero knowledge, such as the Fortnow and Aiello–Håstad upper bounds on the complexity of SZK and Okamoto's result that SZK is closed under complement.
  • Strong closure properties of SZK which amount to constructing statistical zero-knowledge proofs for complex assertions built out of simpler assertions already shown to be in SZK.
  • New results about the various measures of "knowledge complexity," including a collapse in the hierarchy corresponding to knowledge complexity in the "hint" sense.
  • Algorithms for manipulating the statistical difference between efficiently samplable distributions, including transformations which "polarize" and "reverse" the statistical relationship between a pair of distributions.
complete_problem_statistical_zero_knowledge.pdf
2002
Bender, Michael A., Antonio Fernández, Dana Ron, Amit Sahai, and Salil Vadhan. “The power of a pebble: exploring and mapping directed graphs.” Information and Computation 176, no. 1 (2002): 1-21.
Exploring and mapping an unknown environment is a fundamental problem that is studied in a variety of contexts. Many results have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions about the environment and solve the mapping problem in this general setting.

We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. The edges emanating from each vertex are numbered from 1 to d, but we do not assume that the vertices of G are labeled. Since the robot has no way of distinguishing between vertices, it has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a "pebble", a device that it can place on a vertex and use to identify the vertex later.

In this paper we show:

  1. If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble.
  2. If the robot does not know an upper bound on the number of vertices \(n\), then \(\Theta(\log \log n)\) pebbles are both necessary and sufficient.
In both cases our algorithms are deterministic. 
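
As a warm-up to the model (under an assumption the paper does not make: that vertices carry distinct labels, so no pebble is needed), mapping reduces to a simple search over the numbered out-edges; navigation is abstracted below as direct queries.

    from collections import deque

    def explore(start, move, d):
        # move(v, i): follow out-edge number i of vertex v.
        graph, queue, seen = {}, deque([start]), {start}
        while queue:
            v = queue.popleft()
            graph[v] = [move(v, i) for i in range(1, d + 1)]
            for w in graph[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return graph

    # Toy environment with 3 labeled vertices and out-degree 2.
    edges = {("a", 1): "b", ("a", 2): "c", ("b", 1): "c", ("b", 2): "a",
             ("c", 1): "a", ("c", 2): "a"}
    print(explore("a", lambda v, i: edges[(v, i)], 2))

Removing the vertex labels breaks this algorithm immediately, which is what makes a single pebble both necessary and interesting.
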
power_of_pebble.pdf
2001
Vadhan, Salil. “The complexity of counting in sparse, regular, and planar graphs.” SIAM Journal on Computing 31, no. 2 (2001): 398-427.
We show that a number of graph-theoretic counting problems remain NP-hard, indeed #P-complete, in very restricted classes of graphs. In particular, we prove that the problems of counting matchings, vertex covers, independent sets, and extremal variants of these all remain hard when restricted to planar bipartite graphs of bounded degree or regular graphs of constant degree. We obtain corollaries about counting cliques in restricted classes of graphs and counting satisfying assignments to restricted classes of monotone 2-CNF formulae. To achieve these results, we introduce a new interpolation-based reduction technique that preserves properties such as constant degree.
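
For concreteness, the brute-force definition of the first counting problem (exponential time; the theorem above says that exact counting remains #P-complete even on planar bipartite graphs of bounded degree):

    from itertools import combinations

    def count_matchings(edges):
        # A matching is a set of pairwise vertex-disjoint edges
        # (the empty set counts). Brute force over all edge subsets.
        total = 0
        for r in range(len(edges) + 1):
            for subset in combinations(edges, r):
                verts = [v for e in subset for v in e]
                if len(verts) == len(set(verts)):
                    total += 1
        return total

    # Path on 4 vertices: {}, {01}, {12}, {23}, {01, 23} -> 5 matchings.
    print(count_matchings([(0, 1), (1, 2), (2, 3)]))  # 5
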
complexity_counting_sparse_regular_planar_graphs.pdf
1999
Sahai, Amit, and Salil Vadhan. “Manipulating statistical difference.” Randomization Methods in Algorithm Design (DIMACS Workshop, December 1997), DIMACS Series in Discrete Mathematics and Theoretical Computer Science 43 (1999): 251-270.

We give several efficient transformations for manipulating the statistical difference (variation distance) between a pair of probability distributions. The effects achieved include increasing the statistical difference, decreasing the statistical difference, "polarizing" the statistical relationship, and "reversing" the statistical relationship. We also show that a boolean formula whose atoms are statements about statistical difference can be transformed into a single statement about statistical difference. All of these transformations can be performed in polynomial time, in the sense that, given circuits which sample from the input distributions, it only takes polynomial time to compute circuits which sample from the output distributions.

By our prior work (see FOCS '97), such transformations for manipulating statistical difference are closely connected to results about SZK, the class of languages possessing statistical zero-knowledge proofs. In particular, some of the transformations given in this paper are equivalent to the closure of SZK under complement and under certain types of Turing reductions. This connection is also discussed briefly in this paper.

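A sketch of the two basic transformations from the first paragraph, written as operations on samplers (here sample_pair(b) returns one sample of \(X_b\)); polarization interleaves the two with suitable parameters:

    import random

    def direct_product(sample_pair, k):
        # (X0, X1) -> (X0^k, X1^k): statistical difference delta goes UP,
        # to at least 1 - 2*exp(-k * delta**2 / 2).
        return lambda b: tuple(sample_pair(b) for _ in range(k))

    def xor_construction(sample_pair, k):
        # (X0, X1) -> (Y0, Y1) with Y_b = (X_{b_1}, ..., X_{b_k}) for
        # uniform bits satisfying b_1 XOR ... XOR b_k = b: delta goes
        # DOWN, to exactly delta**k.
        def new_pair(b):
            bits = [random.randint(0, 1) for _ in range(k - 1)]
            bits.append(b ^ (sum(bits) % 2))
            return tuple(sample_pair(bi) for bi in bits)
        return new_pair

    # Toy usage: two biased coins with statistical difference 1/2.
    pair = lambda b: random.random() < (0.25 if b == 0 else 0.75)
    far = direct_product(pair, 40)     # difference driven toward 1
    near = xor_construction(pair, 5)   # difference driven to (1/2)**5
    print(far(0), near(1))
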
manipulating_statistical_difference.pdf
Wallner, D., E. Harder, and R. Agee. “Key management for multicast: Issues and architectures.” Internet RFC 2627 (June 1999).

This report contains a discussion of the difficult problem of key management for multicast communication sessions.  It focuses on two main areas of concern with respect to key management, which are, initializing the multicast group with a common net key and rekeying the multicast group.  A rekey may be necessary upon the compromise of a user or for other reasons (e.g., periodic rekey).  In particular, this report identifies a technique which allows for secure compromise recovery, while also being robust against collusion of excluded users.  This is one important feature of multicast key management which has not been addressed in detail by most other multicast key management proposals [1,2,4].  The benefits of this proposed technique are that it minimizes the number of transmissions required to rekey the multicast group and it imposes minimal storage requirements on the multicast group.

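A sketch of the tree-based ("logical key hierarchy") rekey at the heart of the report: members sit at the leaves of a binary key tree and hold the keys on their root path; evicting a member replaces exactly those keys, each new key sent under a child key the evicted member never held, for roughly \(2\log_2 n\) messages instead of \(n\). (The indexing and message format below are illustrative conventions of ours, not the RFC's.)

    def rekey_messages(n_leaves, evicted_index):
        # Heap indexing: root = 1, children of node i are 2i and 2i + 1;
        # leaves are n_leaves .. 2*n_leaves - 1 (n_leaves a power of two).
        leaf = n_leaves + evicted_index
        msgs, prev, node = [], leaf, leaf // 2
        while node >= 1:
            for child in (2 * node, 2 * node + 1):
                if child == leaf:
                    continue      # the evicted member must learn nothing
                child_key = f"k{child}_new" if child == prev else f"k{child}"
                msgs.append(f"Enc({child_key}, k{node}_new)")
            prev, node = node, node // 2
        return msgs

    print(rekey_messages(8, 0))
    # ['Enc(k9, k4_new)', 'Enc(k4_new, k2_new)', 'Enc(k5, k2_new)',
    #  'Enc(k2_new, k1_new)', 'Enc(k3, k1_new)']
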
1998
Goldreich, Oded, Amit Sahai, and Salil Vadhan. “Honest-verifier statistical zero-knowledge equals general statistical zero-knowledge.” Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC '98) (1998): 399-408.
We show how to transform any interactive proof system which is statistical zero-knowledge with respect to the honest-verifier, into a proof system which is statistical zero-knowledge with respect to any verifier. This is done by limiting the behavior of potentially cheating verifiers, without using computational assumptions or even referring to the complexity of such verifier strategies. (Previous transformations have either relied on computational assumptions or were applicable only to constant-round public-coin proof systems.)

Our transformation also applies to public-coin (aka Arthur-Merlin) computational zero-knowledge proofs: We transform any Arthur-Merlin proof system which is computational zero-knowledge with respect to the honest-verifier, into an Arthur-Merlin proof system which is computational zero-knowledge with respect to any probabilistic polynomial-time verifier.

A crucial ingredient in our analysis is a new lemma regarding 2-universal hashing functions.

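For reference, a standard 2-universal family (the textbook construction, not necessarily the specific family or lemma used in the paper): \(h_{a,b}(x) = ((ax + b) \bmod p) \bmod m\), which for any fixed \(x \neq y\) collides with probability about \(1/m\) over the choice of \(a, b\).

    import random

    P = (1 << 61) - 1   # a Mersenne prime, larger than the universe

    def sample_hash(m):
        # Draw h_{a,b} from the family; for distinct inputs x != y,
        # Pr[h(x) == h(y)] is roughly 1/m over the choice of (a, b).
        a = random.randrange(1, P)
        b = random.randrange(P)
        return lambda x: ((a * x + b) % P) % m

    h = sample_hash(1024)
    print(h(42), h(43))
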
honest-verifier_statistical_zero-knowledge.pdf
