# Alfred P. Sloan Research Fellowship

2021
Vadhan, Salil, and Tianhao Wang. “Concurrent composition of differential privacy” (2021). arXiv version.
We initiate a study of the composition properties of interactive differentially private mechanisms. An interactive differentially private mechanism is an algorithm that allows an analyst to adaptively ask queries about a sensitive dataset, with the property that an adversarial analyst's view of the interaction is approximately the same regardless of whether or not any individual's data is in the dataset. Previous studies of composition of differential privacy have focused on non-interactive algorithms, but interactive mechanisms are needed to capture many of the intended applications of differential privacy and a number of the important differentially private primitives.

We focus on concurrent composition, where an adversary can arbitrarily interleave its queries to several differentially private mechanisms, which may be feasible when differentially private query systems are deployed in practice. We prove that when the interactive mechanisms being composed are pure differentially private, their concurrent composition achieves privacy parameters (with respect to pure or approximate differential privacy) that match the (optimal) composition theorem for noninteractive differential privacy. We also prove a composition theorem for interactive mechanisms that satisfy approximate differential privacy. That bound is weaker than even the basic (suboptimal) composition theorem for noninteractive differential privacy, and we leave closing the gap as a direction for future research, along with understanding concurrent composition for other variants of differential privacy.
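The composition bounds discussed above can be made concrete numerically. The sketch below (not from the paper; the function names are illustrative) implements the standard basic and advanced composition formulas for noninteractive differential privacy, which the paper shows carry over to concurrent composition of pure-DP interactive mechanisms.

```python
import math

def basic_composition(eps_list, delta_list):
    """Basic composition theorem: running k mechanisms that are
    (eps_i, delta_i)-DP yields a (sum eps_i, sum delta_i)-DP mechanism."""
    return sum(eps_list), sum(delta_list)

def advanced_composition(eps, k, delta_slack):
    """Advanced composition: k-fold composition of eps-DP mechanisms is
    (eps', delta_slack)-DP, where eps' grows roughly like sqrt(k) * eps
    for small eps instead of k * eps."""
    return (math.sqrt(2 * k * math.log(1 / delta_slack)) * eps
            + k * eps * (math.exp(eps) - 1))
```

For many small-epsilon mechanisms the advanced bound is much better than the linear one: composing one hundred 0.1-DP mechanisms gives `eps = 10` under basic composition, but `advanced_composition(0.1, 100, 1e-6)` is roughly 6.3.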
2012
Schoenebeck, Grant, and Salil Vadhan. “The computational complexity of Nash equilibria in concisely represented games.” ACM Transactions on Computation Theory 4, no. 2 (2012).

Version History: Preliminary versions as ECCC TR05-052 (https://eccc.weizmann.ac.il/report/2005/052/; attached as ECCC2005.pdf) and in EC '06 (https://dl.acm.org/doi/10.1145/1134707.1134737; attached as EC2006.pdf).

Games may be represented in many different ways, and different representations of games affect the complexity of problems associated with games, such as finding a Nash equilibrium. The traditional method of representing a game is to explicitly list all the payoffs, but this incurs an exponential blowup as the number of agents grows. We study two models of concisely represented games: circuit games, where the payoffs are computed by a given boolean circuit, and graph games, where each agent’s payoff is a function of only the strategies played by its neighbors in a given graph. For these two models, we study the complexity of four questions: determining if a given strategy is a Nash equilibrium, finding a Nash equilibrium, determining if there exists a pure Nash equilibrium, and determining if there exists a Nash equilibrium in which the payoffs to a player meet some given guarantees. In many cases, we obtain tight results, showing that the problems are complete for various complexity classes.
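The first of the four questions, deciding whether a given strategy profile is a Nash equilibrium, is easy to state in code for an explicitly represented game (the hardness results concern concise representations). A small illustrative sketch, with a game and helper names of my own choosing:

```python
def is_pure_nash(payoffs, profile, strategy_sets):
    """Check whether `profile` is a pure Nash equilibrium.

    payoffs: function (player, profile_tuple) -> number
    profile: tuple of chosen strategies, one per player
    strategy_sets: list of strategy lists, one per player
    """
    for player, strategies in enumerate(strategy_sets):
        current = payoffs(player, profile)
        for s in strategies:
            deviation = profile[:player] + (s,) + profile[player + 1:]
            if payoffs(player, deviation) > current:
                return False  # a profitable unilateral deviation exists
    return True

# Prisoner's dilemma with strategies C(ooperate) / D(efect):
PD = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
payoff = lambda player, prof: PD[prof][player]
```

Here `("D", "D")` is the unique pure Nash equilibrium, while `("C", "C")` is not one, since either player gains by defecting.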

2010
Guruswami, Venkatesan, and Salil Vadhan. “A lower bound on list size for list decoding.” IEEE Transactions on Information Theory 56, no. 11 (2010): 5681-5688.

Version History: Preliminary version published in RANDOM '05 (https://link.springer.com/chapter/10.1007/11538462_27) and attached as RANDOM2005.pdf.

A $$q$$-ary error-correcting code $$C ⊆ \{1,2,...,q\}^n$$ is said to be list-decodable to radius $$\rho$$ with list size $$L$$ if every Hamming ball of radius $$\rho$$ contains at most $$L$$ codewords of $$C$$. We prove that in order for a $$q$$-ary code to be list-decodable up to radius $$(1 − 1/q)(1 − ε)n$$, we must have $$L = Ω(1/ε^2)$$. Specifically, we prove that there exists a constant $$c_q > 0$$ and a function $$f_q$$ such that for small enough $$ε > 0$$, if $$C$$ is list-decodable to radius $$(1 − 1/q)(1 − ε)n$$ with list size $$c_q/ε^2$$, then $$C$$ has at most $$f_q(ε)$$ codewords, independent of $$n$$. This result is asymptotically tight (treating $$q$$ as a constant), since such codes with an exponential (in $$n$$) number of codewords are known for list size $$L = O(1/ε^2)$$.

A result similar to ours is implicit in Blinovsky [Bli] for the binary $$(q=2)$$ case. Our proof works for all alphabet sizes, and is technically and conceptually simpler.
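The list-size definition above can be checked directly for toy codes by exhaustive search. A brute-force sketch (illustrative only; the search is exponential in $$n$$):

```python
from itertools import product

def hamming(x, y):
    """Hamming distance between two words of equal length."""
    return sum(a != b for a, b in zip(x, y))

def max_list_size(code, radius, q, n):
    """Largest number of codewords in any Hamming ball of the given
    radius: the list size L for list decoding `code` to that radius."""
    best = 0
    for center in product(range(q), repeat=n):
        best = max(best, sum(hamming(c, center) <= radius for c in code))
    return best

# Binary repetition code of length 3: {000, 111}, minimum distance 3.
code = [(0, 0, 0), (1, 1, 1)]
```

For this code every radius-1 ball contains at most one codeword (unique decoding), while some radius-2 balls already contain both.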

2008
Sanghvi, Saurabh, and Salil Vadhan. “The round complexity of two-party random selection.” SIAM Journal on Computing: Special Issue on STOC '05 38, no. 2 (2008): 523-550.

Version History: Preliminary versions of this work appeared in the first author's undergraduate thesis and in the conference paper (STOC '05).

We study the round complexity of two-party protocols for generating a random $$n$$-bit string such that the output is guaranteed to have bounded “bias,” even if one of the two parties deviates from the protocol (possibly using unlimited computational resources). Specifically, we require that the output’s statistical difference from the uniform distribution on $$\{0, 1\}^n$$ is bounded by a constant less than 1. We present a protocol for the above problem that has $$2\log^* n + O(1)$$ rounds, improving a previous $$2n$$-round protocol of Goldreich, Goldwasser, and Linial (FOCS ’91). Like the GGL protocol, our protocol actually provides a stronger guarantee, ensuring that the output lands in any set $$T ⊆ \{0, 1\}^n$$ of density $$μ$$ with probability at most $$O(\sqrt{μ + δ})$$, where $$δ$$ may be an arbitrarily small constant. We then prove a nearly matching lower bound, showing that any protocol guaranteeing bounded statistical difference requires at least $$\log^* n − \log^* \log^* n − O(1)$$ rounds. We also prove several results for the case when the output’s bias is measured by the maximum multiplicative factor by which a party can increase the probability of a set $$T ⊆ \{0, 1\}^n$$.
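The bias measure used above, statistical difference from uniform, is just the total variation distance. A small sketch for explicit distributions over $$\{0,1\}^n$$ (function name illustrative):

```python
def statistical_difference(dist, n):
    """Total variation distance between `dist` (a dict mapping n-bit
    strings to probabilities; missing strings have probability 0)
    and the uniform distribution on {0,1}^n."""
    uniform = 1.0 / (2 ** n)
    total = 0.0
    for i in range(2 ** n):
        s = format(i, f"0{n}b")
        total += abs(dist.get(s, 0.0) - uniform)
    return total / 2.0
```

A point mass on a single 2-bit string has distance $$1 - 1/4 = 0.75$$ from uniform, while the uniform distribution itself has distance 0.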

2007
Ron, Dana, Amir Rosenfeld, and Salil Vadhan. “The hardness of the expected decision depth problem.” Information Processing Letters 101, no. 3 (2007): 112-118.

Given a function $$f$$ over $$n$$ binary variables and an ordering of the $$n$$ variables, we consider the Expected Decision Depth problem: what is the expected number of bits that need to be observed until the value of the function is determined, when bits of the input are observed according to the given order? Our main finding is that this problem is (essentially) #P-complete. Moreover, the hardness holds even when the function $$f$$ is represented as a decision tree.
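For small $$n$$ the quantity in question can be computed by brute force, which is what makes the #P-completeness result striking: the exhaustive sketch below (illustrative, exponential time) is essentially the best one can hope for in general if the hardness result is taken at face value.

```python
from itertools import product

def expected_decision_depth(f, n, order):
    """Expected number of bits observed (in the given variable order,
    on a uniformly random input) until the value of f is determined,
    computed by brute force over all inputs and prefixes."""
    total = 0
    for x in product((0, 1), repeat=n):
        depth = 0
        for k in range(n + 1):
            # Bits revealed so far: the first k variables in `order`.
            fixed = {order[i]: x[order[i]] for i in range(k)}
            values = {f(y) for y in product((0, 1), repeat=n)
                      if all(y[i] == v for i, v in fixed.items())}
            if len(values) == 1:  # value of f is now determined
                depth = k
                break
        total += depth
    return total / 2 ** n
```

For the dictator function $$f(x) = x_0$$ the expected depth is 1 if $$x_0$$ is queried first and 2 if it is queried last; for AND on two bits with order $$(x_0, x_1)$$ it is 1.5.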

2006
Ben-Sasson, Eli, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Robust PCPs of proximity, shorter PCPs, and applications to coding.” SIAM Journal on Computing: Special Issue on Randomness and Complexity 36, no. 4 (2006): 889-974.

Version History: Extended abstract in STOC '04.

We continue the study of the trade-off between the length of probabilistically checkable proofs (PCPs) and their query complexity, establishing the following main results (which refer to proofs of satisfiability of circuits of size $$n$$):

1. We present PCPs of length $$\exp(o(\log \log n)^2) · n$$ that can be verified by making $$o(\log \log n)$$ Boolean queries.
2. For every $$ε > 0$$, we present PCPs of length $$\exp(\log^ε n) · n$$ that can be verified by making a constant number of Boolean queries.

In both cases, false assertions are rejected with constant probability (which may be set to be arbitrarily close to 1). The multiplicative overhead on the length of the proof, introduced by transforming a proof into a probabilistically checkable one, is just quasi-polylogarithmic in the first case (of query complexity $$o(\log \log n)$$), and is $$2^{(\log n)^ε}$$, for any $$ε > 0$$, in the second case (of constant query complexity).

Our techniques include the introduction of a new variant of PCPs that we call “robust PCPs of proximity.” These new PCPs facilitate proof composition, which is a central ingredient in the construction of PCP systems. (A related notion and its composition properties were discovered independently by Dinur and Reingold.) Our main technical contribution is a construction of a “length-efficient” robust PCP of proximity. While the new construction uses many of the standard techniques used in PCP constructions, it does differ from previous constructions in fundamental ways, and in particular does not use the “parallelization” step of Arora et al. [J. ACM, 45 (1998), pp. 501–555]. The alternative approach may be of independent interest.

We also obtain analogous quantitative results for locally testable codes. In addition, we introduce a relaxed notion of locally decodable codes and present such codes mapping $$k$$ information bits to codewords of length $$k^{1+ε}$$ for any $$ε > 0$$.

Healy, Alex, Salil Vadhan, and Emanuele Viola. “Using nondeterminism to amplify hardness.” SIAM Journal on Computing: Special Issue on STOC '04 35, no. 4 (2006): 903-931.

We revisit the problem of hardness amplification in $$\mathcal{NP}$$ as recently studied by O’Donnell [J. Comput. System Sci., 69 (2004), pp. 68–94]. We prove that if $$\mathcal{NP}$$ has a balanced function $$f$$ such that any circuit of size $$s(n)$$ fails to compute $$f$$ on a $$1/\mathrm{poly}(n)$$ fraction of inputs, then $$\mathcal{NP}$$ has a function $$f'$$ such that any circuit of size $$s'(n) = s(\sqrt{n})^{\Omega(1)}$$ fails to compute $$f'$$ on a $$1/2 − 1/s'(n)$$ fraction of inputs. In particular,

1. if $$s(n) = n^{\omega(1)}$$, we amplify to hardness $$1/2 - 1/n^{\omega(1)}$$;
2. if $$s(n) = 2^{n^{\Omega(1)}}$$, we amplify to hardness $$1/2 - 1/2^{n^{\Omega(1)}}$$;
3. if $$s(n) = 2^{\Omega(n)}$$, we amplify to hardness $$1/2 - 1/2^{\Omega(\sqrt{n})}$$.

Our results improve those of O’Donnell, which amplify to $$1/2 - 1/\sqrt{n}$$. O’Donnell also proved that no construction of a certain general form could amplify beyond $$1/2 - 1/n$$. We bypass this barrier by using both derandomization and nondeterminism in the construction of $$f'$$. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that $$f$$ is balanced are necessary for “black-box” hardness amplification procedures (such as ours).

Vadhan, Salil. “An unconditional study of computational zero knowledge.” SIAM Journal on Computing: Special Issue on Randomness and Complexity 36, no. 4 (2006): 1160-1214.

Version History: Extended abstract in FOCS '04.

We prove a number of general theorems about $$\mathbf{ZK}$$, the class of problems possessing (computational) zero-knowledge proofs. Our results are unconditional, in contrast to most previous works on $$\mathbf{ZK}$$, which rely on the assumption that one-way functions exist. We establish several new characterizations of $$\mathbf{ZK}$$ and use these characterizations to prove results such as the following:

1. Honest-verifier $$\mathbf{ZK}$$ equals general $$\mathbf{ZK}$$.
2. Public-coin $$\mathbf{ZK}$$ equals private-coin $$\mathbf{ZK}$$.
3. $$\mathbf{ZK}$$ is closed under union.
4. $$\mathbf{ZK}$$ with imperfect completeness equals $$\mathbf{ZK}$$ with perfect completeness.
5. Any problem in $$\mathbf{ZK} \cap \mathbf{NP}$$ can be proven in computational zero knowledge by a $$\mathbf{BPP^{NP}}$$ prover.
6. $$\mathbf{ZK}$$ with black-box simulators equals $$\mathbf{ZK}$$ with general, non–black-box simulators.

The above equalities refer to the resulting class of problems (and do not necessarily preserve other efficiency measures such as round complexity). Our approach is to combine the conditional techniques previously used in the study of $$\mathbf{ZK}$$ with the unconditional techniques developed in the study of $$\mathbf{SZK}$$, the class of problems possessing statistical zero-knowledge proofs. To enable this combination, we prove that every problem in $$\mathbf{ZK}$$ can be decomposed into a problem in $$\mathbf{SZK}$$ together with a set of instances from which a one-way function can be constructed.

Reingold, Omer, Luca Trevisan, and Salil Vadhan. “Pseudorandom walks in regular digraphs and the RL vs. L problem.” In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC '06), 457-466, 2006.

Version History: Preliminary version as ECCC TR05-22, February 2005 (https://eccc.weizmann.ac.il/report/2005/022/; attached as ECCC2005.pdf).

We revisit the general $$\mathbf{RL}$$ vs. $$\mathbf{L}$$ question, obtaining the following results.

1. Generalizing Reingold’s techniques to directed graphs, we present a deterministic, log-space algorithm that, given a regular directed graph $$G$$ (or, more generally, a digraph with Eulerian connected components) and two vertices $$s$$ and $$t$$, finds a path between $$s$$ and $$t$$ if one exists.
2. If we restrict ourselves to directed graphs that are regular and consistently labelled, then we are able to produce pseudorandom walks for such graphs in logarithmic space (this result has already found an independent application).
3. We prove that if (2) could be generalized to all regular directed graphs (including ones that are not consistently labelled) then $$\mathbf{L=RL}$$. We do so by exhibiting a new complete promise problem for $$\mathbf{RL}$$, and showing that such a problem can be solved in deterministic logarithmic space given a log-space pseudorandom walk generator for regular directed graphs.
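The class of graphs in result (1) can be recognized with a simple degree check (a digraph has Eulerian connected components exactly when every vertex's in-degree equals its out-degree), and the search problem itself is easy with linear space; the paper's contribution is solving it in logarithmic space. An illustrative sketch using plain BFS, not the paper's algorithm (it assumes every vertex appears as a key of the adjacency dict):

```python
from collections import deque

def is_eulerian_digraph(adj):
    """True iff every vertex has in-degree equal to out-degree, i.e. each
    connected component admits an Eulerian circuit -- the class of
    digraphs covered by result (1)."""
    indeg = {v: 0 for v in adj}
    for v, nbrs in adj.items():
        for w in nbrs:
            indeg[w] += 1
    return all(indeg[v] == len(adj[v]) for v in adj)

def find_path(adj, s, t):
    """BFS s-t path (linear space; the paper achieves log-space)."""
    parent, queue = {s: None}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            path = []
            while v is not None:  # walk parent pointers back to s
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None  # t unreachable from s
```

A directed cycle is the simplest Eulerian digraph; a single directed edge is not, since its endpoints have unbalanced degrees.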
2005
Trevisan, Luca, Salil Vadhan, and David Zuckerman. “Compression of samplable sources.” Computational Complexity: Special Issue on CCC'04 14, no. 3 (2005): 186-227.

We study the compression of polynomially samplable sources. In particular, we give efficient prefix-free compression and decompression algorithms for three classes of such sources (whose support is a subset of $$\{0, 1\}^n$$).

1. We show how to compress sources $$X$$ samplable by logspace machines to expected length $$H(X) + O(1)$$.

Our next results concern flat sources whose support is in $$\mathbf{P}$$.

2. If $$H(X) ≤ k = n−O(\log n)$$, we show how to compress to expected length $$k + \mathrm{polylog}(n − k)$$.
3. If the support of $$X$$ is the witness set for a self-reducible $$\mathbf{NP}$$ relation, then we show how to compress to expected length $$H(X)+ 5$$.
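For a flat source the entropy is $$\log_2$$ of the support size, so encoding a sample by its index in a canonical ordering of the support already achieves length within one bit of $$H(X)$$; the paper's contribution is doing this *efficiently* when the support is only implicitly given (e.g. decidable in $$\mathbf{P}$$). An illustrative sketch for an explicitly listed support (function names are my own):

```python
import math

def enumerative_compress(x, support):
    """Encode a sample from a flat source as its index in a sorted
    listing of the support, using ceil(log2 |S|) bits. Fixed-length
    codes are trivially prefix-free."""
    S = sorted(support)
    width = max(1, math.ceil(math.log2(len(S))))
    return format(S.index(x), f"0{width}b")

def enumerative_decompress(bits, support):
    """Invert enumerative_compress given the same support."""
    S = sorted(support)
    return S[int(bits, 2)]
```

With a 3-element support the code length is 2 bits, within one bit of the entropy $$\log_2 3 ≈ 1.58$$.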
Ben-Sasson, Eli, Oded Goldreich, Prahladh Harsha, Madhu Sudan, and Salil Vadhan. “Short PCPs verifiable in polylogarithmic time.” In Proceedings of the 20th Annual IEEE Conference on Computational Complexity (CCC '05), 120-134, 2005.
We show that every language in $$\mathbf{NP}$$ has a probabilistically checkable proof of proximity (i.e., proofs asserting that an instance is “close” to a member of the language), where the verifier's running time is polylogarithmic in the input size and the length of the probabilistically checkable proof is only polylogarithmically larger than the length of the classical proof. (Such a verifier can only query polylogarithmically many bits of the input instance and the proof. Thus it needs oracle access to the input as well as the proof, and cannot guarantee that the input is in the language, only that it is close to some string in the language.) If the verifier is restricted further in its query complexity and only allowed $$q$$ queries, then the proof size blows up by a factor of $$2^{(\log n)^{c/q}}$$, where the constant $$c$$ depends only on the language (and is independent of $$q$$).

Our results thus give efficient (in the sense of running time) versions of the shortest known PCPs, due to Ben-Sasson et al. (STOC '04) and Ben-Sasson and Sudan (STOC '05), respectively. The time complexity of the verifier and the size of the proof were the original emphases in the definition of holographic proofs, due to Babai et al. (STOC '91), and our work is the first to return to these emphases since their work. Of technical interest in our proof is a new complete problem for $$\mathbf{NEXP}$$ based on constraint satisfaction problems with very low complexity constraints, and techniques to arithmetize such constraints over fields of small characteristic.