Sanghvi, Saurabh, and Salil Vadhan. “The round complexity of two-party random selection.” SIAM Journal on Computing: Special Issue on STOC '05 38, no. 2 (2008): 523–550.
Abstract
Version History. Preliminary versions of this work appeared in the first author's undergraduate thesis and in the conference paper (STOC '05).
We study the round complexity of two-party protocols for generating a random \(n\)-bit string such that the output is guaranteed to have bounded “bias,” even if one of the two parties deviates from the protocol (possibly using unlimited computational resources). Specifically, we require that the output’s statistical difference from the uniform distribution on \(\{0, 1\}^n\) is bounded by a constant less than 1. We present a protocol for the above problem that has \(2\log^* n + O(1)\) rounds, improving a previous \(2n\)-round protocol of Goldreich, Goldwasser, and Linial (FOCS ’91). Like the GGL protocol, our protocol actually provides a stronger guarantee, ensuring that the output lands in any set \(T \subseteq \{0, 1\}^n\) of density \(\mu\) with probability at most \(O(\sqrt{\mu + \delta})\), where \(\delta\) may be an arbitrarily small constant. We then prove a nearly matching lower bound, showing that any protocol guaranteeing bounded statistical difference requires at least \(\log^* n - \log^* \log^* n - O(1)\) rounds. We also prove several results for the case when the output’s bias is measured by the maximum multiplicative factor by which a party can increase the probability of a set \(T \subseteq \{0, 1\}^n\).
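For reference, the “statistical difference” in this abstract is the standard total variation distance from uniform; for protocol output \(X\) over \(\{0,1\}^n\) it can be written equivalently as

\[
\Delta(X, U_n) \;=\; \max_{S \subseteq \{0,1\}^n} \left| \Pr[X \in S] - \frac{|S|}{2^n} \right| \;=\; \frac{1}{2} \sum_{x \in \{0,1\}^n} \left| \Pr[X = x] - 2^{-n} \right|.
\]

The first form explains the connection to the stronger set-landing guarantee: bounding the probability of landing in every set \(T\) of density \(\mu\) directly bounds the maximization over \(S\).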
SICOMP2008.pdf

Chailloux, André, Dragos Florin Ciocan, Iordanis Kerenidis, and Salil Vadhan. “Interactive and noninteractive zero knowledge are equivalent in the help model.” In Proceedings of the Third Theory of Cryptography Conference (TCC '08), 4948:501–534. Springer-Verlag, Lecture Notes in Computer Science, 2008.
Abstract
Version History:

Preliminary versions of this work previously appeared on the Cryptology ePrint Archive and in the second author’s undergraduate thesis.

Chailloux, A., Kerenidis, I.: The role of help in classical and quantum zero-knowledge. Cryptology ePrint Archive, Report 2007/421 (2007), http://eprint.iacr.org/

Ciocan, D.F., Vadhan, S.: Interactive and noninteractive zero knowledge coincide in the help model. Cryptology ePrint Archive, Report 2007/389 (2007), http://eprint.iacr.org/

Ciocan, D.: Constructions and characterizations of noninteractive zero-knowledge. Undergraduate thesis, Harvard University (2007)
We show that interactive and noninteractive zero-knowledge are equivalent in the ‘help model’ of Ben-Or and Gutfreund (J. Cryptology, 2003). In this model, the shared reference string is generated by a probabilistic polynomial-time dealer who is given access to the statement to be proven. Our results do not rely on any unproven complexity assumptions and hold for statistical zero knowledge, for computational zero knowledge restricted to AM, and for quantum zero knowledge when the help is a pure quantum state.
TCC2008.pdf

Ong, Shien Jin, and Salil Vadhan. “An equivalence between zero knowledge and commitments.” In R. Canetti, editor, Proceedings of the Third Theory of Cryptography Conference (TCC '08), 4948:482–500. Springer-Verlag, Lecture Notes in Computer Science, 2008.
Abstract
We show that a language in NP has a zero-knowledge protocol if and only if the language has an “instance-dependent” commitment scheme. An instance-dependent commitment scheme for a given language is a commitment scheme that can depend on an instance of the language, and where the hiding and binding properties are required to hold only on the yes and no instances of the language, respectively.
The novel direction is the “only if” direction. Thus, we confirm the widely held belief that commitments are not only sufficient for zero-knowledge protocols, but necessary as well. Previous results of this type either held only for restricted types of protocols or languages, or used nonstandard relaxations of (instance-dependent) commitment schemes.
TCC2008.pdf

Gutfreund, Dan, and Salil Vadhan. “Limitations on hardness vs. randomness under uniform reductions.” In Proceedings of the 12th International Workshop on Randomization and Computation (RANDOM '08), Lecture Notes in Computer Science, 5171:469–482. Springer-Verlag, 2008.
Abstract
We consider (uniform) reductions from computing a function \({f}\) to the task of distinguishing the output of some pseudorandom generator \({G}\) from uniform. Impagliazzo and Wigderson [10] and Trevisan and Vadhan [24] exhibited such reductions for every function \({f}\) in PSPACE. Moreover, their reductions are “black box,” showing how to use any distinguisher \({T}\), given as oracle, in order to compute \({f}\) (regardless of the complexity of \({T}\)). The reductions are also adaptive, but with the restriction that queries of the same length do not occur in different levels of adaptivity. Impagliazzo and Wigderson [10] also exhibited such reductions for every function \({f}\) in EXP, but those reductions are not black-box, because they only work when the oracle \({T}\) is computable by small circuits.
Our main results are that:
– Non-adaptive black-box reductions as above can only exist for functions \({f}\) in \(\mathsf{BPP}^{\mathsf{NP}}\) (and thus are unlikely to exist for all of PSPACE).
– Adaptive black-box reductions, with the same restriction on the adaptivity as above, can only exist for functions \({f}\) in PSPACE (and thus are unlikely to exist for all of EXP).
Beyond shedding light on proof techniques in the area of hardness vs. randomness, our results (together with [10,24]) can be viewed in a more general context as identifying techniques that overcome limitations of black-box reductions, which may be useful elsewhere in complexity theory (and the foundations of cryptography).
RANDOM2008.pdf

Bogdanov, Andrej, Elchanan Mossel, and Salil Vadhan. “The complexity of distinguishing Markov random fields.” In Proceedings of the 12th International Workshop on Randomization and Computation (RANDOM '08), Lecture Notes in Computer Science, 5171:331–342. Springer-Verlag, 2008.
Abstract
Markov random fields are often used to model high-dimensional distributions in a number of applied areas. A number of recent papers have studied the problem of reconstructing a dependency graph of bounded degree from independent samples from the Markov random field. These results require observing samples of the distribution at all nodes of the graph. It was heuristically recognized that the problem of reconstructing the model when there are hidden variables (some of the variables are not observed) is much harder.
Here we prove that the problem of reconstructing bounded-degree models with hidden nodes is hard. Specifically, we show that unless NP = RP:

– It is impossible to decide in randomized polynomial time if two models generate distributions whose statistical distance is at most 1/3 or at least 2/3.

– Given two generating models whose statistical distance is promised to be at least 1/3, and oracle access to independent samples from one of the models, it is impossible to decide in randomized polynomial time which of the two models is consistent with the samples.
The second problem remains hard even if the samples are generated efficiently, albeit under a stronger assumption.
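As an illustrative sketch (not from the paper), the quantity behind the 1/3-vs-2/3 promise gap above is the statistical (total variation) distance between two distributions. For distributions given explicitly over a small outcome space it is easy to compute; the hardness result concerns Markov random fields with hidden nodes, whose distributions range over exponentially many configurations and cannot be tabulated like this.

```python
def statistical_distance(p, q):
    """Total variation distance between two distributions, each given as a
    dict mapping outcomes to probabilities (missing outcomes have mass 0)."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two toy distributions over 2-bit strings (hypothetical example data).
p = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}  # uniform
q = {"00": 0.40, "01": 0.40, "10": 0.10, "11": 0.10}  # biased

d = statistical_distance(p, q)  # 0.5 * (0.15 + 0.15 + 0.15 + 0.15) = 0.3
```

A distance of 0.3 falls below the 1/3 threshold, so in the promise problem above these two models would count as "close"; the theorem says that for bounded-degree models with hidden nodes, even separating distance ≤ 1/3 from ≥ 2/3 is infeasible unless NP = RP.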
RANDOM2008.pdf

Reingold, Omer, Luca Trevisan, Madhur Tulsiani, and Salil Vadhan. “Dense subsets of pseudorandom sets.” In Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS '08), 76–85. IEEE, 2008.
Abstract

A theorem of Green, Tao, and Ziegler can be stated (roughly) as follows: if R is a pseudorandom set, and D is a dense subset of R, then D may be modeled by a set M that is dense in the entire domain such that D and M are indistinguishable. (The precise statement refers to “measures” or distributions rather than sets.) The proof of this theorem is very general, and it applies to notions of pseudorandomness and indistinguishability defined in terms of any family of distinguishers with some mild closure properties. The proof proceeds via iterative partitioning and an energy increment argument, in the spirit of the proof of the weak Szemerédi regularity lemma. The “reduction” involved in the proof has exponential complexity in the distinguishing probability.

We present a new proof inspired by Nisan's proof of Impagliazzo's hardcore set theorem. The reduction in our proof has polynomial complexity in the distinguishing probability and provides a new characterization of the notion of “pseudoentropy” of a distribution. A proof similar to ours has also been independently discovered by Gowers [2]. We also follow the connection between the two theorems and obtain a new proof of Impagliazzo's hardcore set theorem via iterative partitioning and energy increment. While our reduction has exponential complexity in some parameters, it has the advantage that the hardcore set is efficiently recognizable.
IEEE2008.pdf