%0 Journal Article %J IEEE Transactions on Information Theory %D 2010 %T A lower bound on list size for list decoding %A Venkatesan Guruswami %A Salil Vadhan %X

Version History: Preliminary version published in RANDOM '05 (https://link.springer.com/chapter/10.1007/11538462_27) and attached as RANDOM2005.pdf.

A q-ary error-correcting code $$C \subseteq \{1,2,\dots,q\}^n$$ is said to be list-decodable to radius $$\rho$$ with list size $$L$$ if every Hamming ball of radius $$\rho$$ contains at most $$L$$ codewords of $$C$$. We prove that in order for a q-ary code to be list-decodable up to radius $$(1-1/q)(1-\varepsilon)n$$, we must have $$L = \Omega(1/\varepsilon^2)$$. Specifically, we prove that there exist a constant $$c_q > 0$$ and a function $$f_q$$ such that for small enough $$\varepsilon > 0$$, if $$C$$ is list-decodable to radius $$(1-1/q)(1-\varepsilon)n$$ with list size $$c_q/\varepsilon^2$$, then $$C$$ has at most $$f_q(\varepsilon)$$ codewords, independent of $$n$$. This result is asymptotically tight (treating $$q$$ as a constant), since such codes with an exponential (in $$n$$) number of codewords are known for list size $$L = O(1/\varepsilon^2)$$.

A result similar to ours is implicit in Blinovsky [Bli] for the binary $$(q=2)$$ case. Our proof works for all alphabet sizes, and is technically and conceptually simpler.
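The list-decodability condition above can be checked mechanically on toy examples. The following sketch (ours, not from the paper) tests the definition by brute force over all q^n possible ball centers, so it is feasible only for tiny parameters:

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length words."""
    return sum(a != b for a, b in zip(u, v))

def is_list_decodable(code, q, n, radius, L):
    """Check the definition directly: every Hamming ball of the given radius
    (centered at any word of {1,...,q}^n) contains at most L codewords."""
    for center in product(range(1, q + 1), repeat=n):
        if sum(1 for c in code if hamming(center, c) <= radius) > L:
            return False
    return True

# Binary repetition code of length 4: the two codewords are at distance 4,
# so no center is within distance 1 of both, but (1,1,2,2) is within 2 of both.
code = [(1, 1, 1, 1), (2, 2, 2, 2)]
print(is_list_decodable(code, q=2, n=4, radius=1, L=1))  # True
print(is_list_decodable(code, q=2, n=4, radius=2, L=1))  # False
```

The exhaustive search over centers is exponential in n; it serves only to make the definition concrete, not as a practical decoder.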

%B IEEE Transactions on Information Theory %V 56 %P 5681-5688 %G eng %U https://ieeexplore.ieee.org/document/5605366 %N 11 %0 Journal Article %J Computational Complexity %D 2010 %T Are PCPs inherent in efficient arguments? %A Guy Rothblum %A Salil Vadhan %X

Version History: Special Issue on CCC '09.

Starting with Kilian (STOC ‘92), several works have shown how to use probabilistically checkable proofs (PCPs) and cryptographic primitives such as collision-resistant hashing to construct very efficient argument systems (a.k.a. computationally sound proofs), for example with polylogarithmic communication complexity. Ishai et al. (CCC ‘07) raised the question of whether PCPs are inherent in efficient arguments, and if so, to what extent. We give evidence that they are, by showing how to convert any argument system whose soundness is reducible to the security of some cryptographic primitive into a PCP system whose efficiency is related to that of the argument system and the reduction (under certain complexity assumptions).

%B Computational Complexity %V 19 %P 265-304 %G eng %U https://link.springer.com/article/10.1007/s00037-010-0291-3 %N 2 %0 Conference Paper %B Daniele Micciancio, editor, Proceedings of the 7th IACR Theory of Cryptography Conference (TCC ‘10), Lecture Notes on Computer Science %D 2010 %T Composition of zero-knowledge proofs with efficient provers %A Eleanor Birrell %A Salil Vadhan %X

We revisit the composability of different forms of zero-knowledge proofs when the honest prover strategy is restricted to be polynomial time (given an appropriate auxiliary input). Our results are:

1. When restricted to efficient provers, the original Goldwasser–Micali–Rackoff (GMR) definition of zero knowledge (STOC ‘85), here called plain zero knowledge, is closed under a constant number of sequential compositions (on the same input). This contrasts with the case of unbounded provers, where Goldreich and Krawczyk (ICALP ‘90, SICOMP ‘96) exhibited a protocol that is zero knowledge under the GMR definition, but for which the sequential composition of 2 copies is not zero knowledge.

2. If we relax the GMR definition to only require that the simulation is indistinguishable from the verifier’s view by uniform polynomial-time distinguishers, with no auxiliary input beyond the statement being proven, then again zero knowledge is not closed under sequential composition of 2 copies.

3. We show that auxiliary-input zero knowledge with efficient provers is not closed under parallel composition of 2 copies under the assumption that there is a secure key agreement protocol (in which it is easy to recognize valid transcripts). Feige and Shamir (STOC ‘90) gave similar results under the seemingly incomparable assumptions that (a) the discrete logarithm problem is hard, or (b) $$\mathcal{UP}\nsubseteq\mathcal{BPP}$$ and one-way functions exist.
%B Daniele Micciancio, editor, Proceedings of the 7th IACR Theory of Cryptography Conference (TCC ‘10), Lecture Notes on Computer Science %I Springer-Verlag %V 5978 %P 572-587 %G eng %U https://link.springer.com/chapter/10.1007/978-3-642-11799-2_34 %0 Conference Paper %B Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘10) %D 2010 %T Boosting and differential privacy %A Cynthia Dwork %A Guy Rothblum %A Salil Vadhan %X Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set $$\mathcal{Q}$$ of queries over an input database, reasonably accurate estimates of the responses to every query in $$\mathcal{Q}$$, even when the number of queries is much larger than the number of rows in the database. Given a base synopsis generator that takes a distribution on $$\mathcal{Q}$$ and produces a "weak" synopsis that yields "good" answers for a majority of the weight in $$\mathcal{Q}$$, our Boosting for Queries algorithm obtains a synopsis that is good for all of $$\mathcal{Q}$$. We ensure privacy for the rows of the database, but the boosting is performed on the queries. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries, i.e., queries whose answers do not vary much under the addition or deletion of a single row. In the execution of our algorithm certain tasks, each incurring some privacy loss, are performed many times. To analyze the cumulative privacy loss, we obtain an $$O(\varepsilon^2)$$ bound on the expected privacy loss from a single $$\varepsilon$$-differentially private mechanism. 
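In symbols, the single-mechanism bound can be stated as follows (a standard formulation of this result; the notation here is ours, not quoted from the paper):

```latex
% Privacy loss of an \varepsilon-DP mechanism M on adjacent databases D, D':
\[
  \mathrm{Loss}(o) \;=\; \ln\frac{\Pr[M(D)=o]}{\Pr[M(D')=o]} \;\in\; [-\varepsilon, \varepsilon].
\]
% Its expectation over o \sim M(D) is the KL divergence between M(D) and M(D'),
% and \varepsilon-differential privacy forces
\[
  \mathbb{E}_{o \sim M(D)}\bigl[\mathrm{Loss}(o)\bigr] \;\le\; \varepsilon\,(e^{\varepsilon}-1) \;=\; O(\varepsilon^2)
  \quad \text{as } \varepsilon \to 0,
\]
% even though the worst-case loss is bounded only by \varepsilon.
```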
Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides $$\varepsilon$$-differential privacy or one of its relaxations, and each of which operates on (potentially) different, adaptively chosen, databases. %B Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘10) %I IEEE %P 51-60 %G eng %U https://ieeexplore.ieee.org/document/5670947 %0 Conference Paper %B T. Rabin, editor, Advances in Cryptology—CRYPTO ‘10, Lecture Notes in Computer Science %D 2010 %T Improved delegation of computation using fully homomorphic encryption %A Kai-Min Chung %A Yael Kalai %A Salil Vadhan %X

Version History: Full version posted as Cryptology ePrint Archive Report 2010/241.

Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive 2009/547), we use fully homomorphic encryption to design improved schemes for delegating computation. In such schemes, a delegator outsources the computation of a function $${F}$$ on many, dynamically chosen inputs $$x_i$$ to a worker in such a way that it is infeasible for the worker to make the delegator accept a result other than $${F}(x_i)$$. The “online stage” of the Gennaro et al. scheme is very efficient: the parties exchange two messages, the delegator runs in time poly$$(\log{T})$$, and the worker runs in time poly$$(T)$$, where $$T$$ is the time complexity of $$F$$. However, the “offline stage” (which depends on the function $$F$$ but not the inputs to be delegated) is inefficient: the delegator runs in time poly$$(T)$$ and generates a public key of length poly$$(T)$$ that needs to be accessed by the worker during the online stage.

Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests poly$$(T)$$ time in the offline stage, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline stage to poly$$(\log{T})$$ at the price of a 4-message (offline) interaction with a poly$$(T)$$-time worker (which need not be the same as the workers used in the online stage). Finally, we describe a “pipelined” implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).

%B T. Rabin, editor, Advances in Cryptology—CRYPTO ‘10, Lecture Notes in Computer Science %I Springer-Verlag %V 6223 %P 483-501 %G eng %U https://link.springer.com/chapter/10.1007/978-3-642-14623-7_26 %0 Conference Paper %B Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘10) %D 2010 %T The limits of two-party differential privacy %A McGregor, Andrew %A Ilya Mironov %A Toniann Pitassi %A Omer Reingold %A Kunal Talwar %A Salil Vadhan %X

Version History and Errata: Subsequent version published in ECCC 2011. Proposition 8 and Part (b) of Theorem 13 in the FOCS version are incorrect, and are removed from the ECCC version.

We study differential privacy in a distributed setting where two parties would like to perform analysis of their joint data while preserving privacy for both datasets. Our results imply almost tight lower bounds on the accuracy of such data analyses, both for specific natural functions (such as Hamming distance) and in general. Our bounds expose a sharp contrast between the two-party setting and the simpler client-server setting (where privacy guarantees are one-sided). In addition, those bounds demonstrate a dramatic gap between the accuracy that can be obtained by differentially private data analysis versus the accuracy obtainable when privacy is relaxed to a computational variant of differential privacy. The first proof technique we develop demonstrates a connection between differential privacy and deterministic extraction from Santha-Vazirani sources. A second connection we expose indicates that the ability to approximate a function by a low-error differentially private protocol is strongly related to the ability to approximate it by a low communication protocol. (The connection goes in both directions).
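As a toy illustration of the two-party setting these bounds constrain (our sketch, not a protocol from the paper): each party can locally perturb its bits with standard randomized response, which is differentially private, and the Hamming distance can then be estimated from the two noisy strings at the cost of an additive error growing roughly like sqrt(n)/eps:

```python
import math
import random

def randomized_response(bits, eps):
    """Keep each bit w.p. e^eps/(1+e^eps), else flip it: an eps-DP local randomizer."""
    p_keep = math.exp(eps) / (1 + math.exp(eps))
    return [b if random.random() < p_keep else 1 - b for b in bits]

def estimate_hamming(noisy_x, noisy_y, eps):
    """Unbiased estimate of the true Hamming distance from two noisy strings."""
    n = len(noisy_x)
    p = math.exp(eps) / (1 + math.exp(eps))
    # If the true bits agree, the noisy pair disagrees w.p. q0 = 2p(1-p);
    # if they disagree, w.p. q1 = p^2 + (1-p)^2.  Invert the expectation
    # E[noisy disagreements] = n*q0 + d*(q1 - q0) to recover d.
    q0 = 2 * p * (1 - p)
    q1 = p * p + (1 - p) * (1 - p)
    d_noisy = sum(a != b for a, b in zip(noisy_x, noisy_y))
    return (d_noisy - n * q0) / (q1 - q0)

random.seed(0)
n, eps = 10_000, 1.0
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]
true_d = sum(a != b for a, b in zip(x, y))
est = estimate_hamming(randomized_response(x, eps), randomized_response(y, eps), eps)
# est deviates from true_d by an additive error on the order of sqrt(n)/eps.
```

The paper's lower bounds show that additive error of roughly this order (up to logarithmic factors) is unavoidable for any differentially private two-party protocol for Hamming distance, which is what separates this setting from the client-server model.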

%B Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS ‘10) %I IEEE %P 81-90 %G eng %U https://ieeexplore.ieee.org/document/5670946