
Any fundamental papers in TCS which were found to be incorrect/wrong later?

Asked on Theoretical Computer Science, October 30, 2021

I am asking this question out of curiosity.

I recently encountered a well-known paper (published in 2009) on the hardness of Euclidean k-means.

The paper showed that the previously known NP-hardness result for Euclidean k-means (published in 2004, with a preliminary version appearing in 1999) was wrong. Note that it took around five years before somebody pointed out that the earlier result was incorrect.
They also mention that many well-known papers (such as the k-means++ paper) had cited the incorrect hardness result until then.

Even when I read a paper myself, I sometimes find minor mistakes. However, they are easily fixable and do not change the main result much.

I want to ask whether there has been any fundamental or highly cited paper that was later found to be incorrect, and whose refutation changed the entire understanding of the field.

Edit: After reading some of the answers, I want to raise another issue: why are these incorrect papers not updated after being shown to be wrong? Some kind of notice should be provided by the body that hosts the paper. In my case (the example I gave above), it took me two years to figure out why people were designing PTASs for the k-means problem (for fixed k) in 2002 if the hardness result only came in 2009. It can be quite frustrating for someone who is not familiar with the field.

9 Answers

In his famous paper "On Computable Numbers, with an Application to the Entscheidungsproblem", Turing actually made some errors, but he corrected them in 1937, one year after publication.

Answered by greps on October 30, 2021

There are a couple of interesting ones from PL theory, where the stated results weren't wrong per se, but were widely interpreted to apply more broadly than they really did, or where clever "workarounds" were found later.

The two that stand out to me are:

  1. A self-interpreter for System $F_\omega$. The proven result was that you could not write an interpreter for $F_\omega$ in $F_\omega$ itself, due to a Gödel-incompleteness-style argument. But it was later shown that this only applies to untyped representations of $F_\omega$ terms: a self-interpreter is possible if you use a typed representation (see the sketch after this list).

  2. Type-preserving continuation-passing style (CPS) transformation of dependent types. This was shown to be impossible. But it was later discovered that the impossibility only holds for that particular style of translation (the "double negation" translation); it was subsequently shown that type-preserving CPS transformation is, in fact, not not possible.
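
To give a rough flavor of what a "typed representation of terms" means in the first item, here is a minimal sketch, assuming Haskell as the host language; the actual result concerns System $F_\omega$, and the names below (UTerm, Term, uEval, tEval) are hypothetical, purely illustrative choices. With an untyped representation every term has the same type and the evaluator can fail at runtime, whereas with a typed (GADT-indexed) representation the evaluator's result type tracks the object-level type, which is the flavor of representation the later self-interpreter results rely on.

    {-# LANGUAGE GADTs #-}

    -- Hypothetical, minimal illustration (in Haskell, not System F_omega) of
    -- untyped versus typed term representations.

    -- Untyped representation: every term has the same type, so the evaluator
    -- must return a tagged value and can fail at runtime on ill-typed terms.
    data UTerm = UInt Int | UBool Bool | UAdd UTerm UTerm | UIf UTerm UTerm UTerm

    data UVal = VInt Int | VBool Bool deriving Show

    uEval :: UTerm -> Maybe UVal
    uEval (UInt n)    = Just (VInt n)
    uEval (UBool b)   = Just (VBool b)
    uEval (UAdd x y)  = case (uEval x, uEval y) of
      (Just (VInt a), Just (VInt b)) -> Just (VInt (a + b))
      _                              -> Nothing   -- ill-typed term
    uEval (UIf c t e) = case uEval c of
      Just (VBool True)  -> uEval t
      Just (VBool False) -> uEval e
      _                  -> Nothing               -- ill-typed term

    -- Typed representation: the index 'a' records the object-level type, so
    -- the evaluator is total and returns a value of exactly that type.
    data Term a where
      TInt  :: Int  -> Term Int
      TBool :: Bool -> Term Bool
      TAdd  :: Term Int  -> Term Int -> Term Int
      TIf   :: Term Bool -> Term a   -> Term a -> Term a

    tEval :: Term a -> a
    tEval (TInt n)    = n
    tEval (TBool b)   = b
    tEval (TAdd x y)  = tEval x + tEval y
    tEval (TIf c t e) = if tEval c then tEval t else tEval e

    main :: IO ()
    main = do
      print (uEval (UAdd (UInt 1) (UBool True)))                          -- Nothing: ill-typed
      print (tEval (TIf (TBool True) (TAdd (TInt 1) (TInt 2)) (TInt 0)))  -- 3

The design point is that the type index on Term rules out ill-typed terms at construction time, so tEval needs no runtime type checks; roughly speaking, this is the ingredient the untyped-representation argument does not cover.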

Answered by jmite on October 30, 2021

I remember that the following was mentioned in a lecture on compiler construction I attended as a student.

The semi-unification problem (SUP) in the theory of programming languages is equivalent to type inference for polymorphic recursion. Kfoury, Tiuryn, and Urzyczyn proved that the problem is undecidable in "The Undecidability of the Semi-unification Problem", Information and Computation 102(1), pp. 83–101, 1993.

As they note,

In this paper, we show that if the signature $\Sigma$ contains at least one function symbol of arity $\ge 2$, then SUP is undecidable. [...] We have to admit, to our embarrassment, that among the many erroneous claims announcing the decidability of SUP, there was also ours [10].

The quoted reference [10] is "A Proper Extension of ML with an Effective Type-Assignment." POPL 1988: 58-69.

In this case, apparently several flawed decidability proofs were circulated, or possibly also published, before those authors finally got it right.

Answered by Hermann Gruber on October 30, 2021

There is some sort of "meta-flaw" pointed out by the following paper:

SOS is not obviously automatizable, even approximately. R. O'Donnell. ITCS '17.

Roughly speaking, it turns out that constant-degree Sum-of-Squares proofs cannot, in general, be found (even approximately) in polynomial time by current methods, contrary to the popular belief within the large body of SoS literature.

Answered by Mahdi Cheraghchi on October 30, 2021

The famous paper

C. Papadimitriou and S. Vempala, On the Approximability of the Traveling Salesman Problem, in Proc. 32nd ACM STOC, pp. 126–133, 2000

that claimed inapproximability bounds of 41/40 for asymmetric TSP and 129/128 for symmetric TSP had a flaw, which was fixed in the journal version six years later (with the weaker bounds of 117/116 and 220/219, respectively):

Papadimitriou, C.H., Vempala, S. On The Approximability Of The Traveling Salesman Problem. Combinatorica 26, 101–120 (2006). https://doi.org/10.1007/s00493-006-0008-z

Answered by Mahdi Cheraghchi on October 30, 2021

A paper in STOC 1994 [1] claimed a polynomial-time constant-factor approximation algorithm for finding balanced separators and some related problems, but the (incomplete) proofs in that paper are now considered flawed (see, e.g., [2]).

[1] Chung and Yau. A near optimal algorithm for edge separators. In Proceedings of the 26th Annual ACM Symposium on Theory of Computing, pages 1–8, 1994.

[2] Shmoys. Cut problems and their application to divide-and-conquer. In Approximation Algorithms for NP-hard Problems, PWS Publishing Co., pages 192–235, 1996.

Answered by Neal Young on October 30, 2021

I am not sure this counts as "fundamental", but the following paper: L. Fortnow, The Complexity of Perfect Zero-Knowledge, Advances in Computing Research (ed. S. Micali), Vol. 18, 1989, had a flaw that was pointed out here: http://www.wisdom.weizmann.ac.il/~oded/PSX/gop.pdf

Also, an error was later found in the analysis of László Babai's landmark graph isomorphism algorithm; he subsequently announced that he had fixed it, as described here: https://www.quantamagazine.org/graph-isomorphism-vanquished-again-20170114/

Answered by Avi Tal on October 30, 2021

The very influential Karp, Vazirani, Vazirani paper on online bipartite matching turned out to have a mistake in one lemma (see here for details) that was only discovered close to two decades after the paper was first published. However, the mistake was indeed a fixable one.

Answered by gov on October 30, 2021

One example is the claimed proof of the Gilbert–Pollak conjecture on the Steiner ratio, which appeared in FOCS '90 and in Algorithmica. The conjecture is now considered open.

Other examples include a sequence of algorithms on graph embedding.

There are more examples, but without similarly published refutations, they are difficult to list due to the subtle nature of the errors.

Answered by Yixin Cao on October 30, 2021
