Turing tarpit

Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy. ~ A. Perlis, 1982

A Turing tarpit is a Turing-complete language over a machine with a minimal number of features. The creation and study of Turing tarpits is one of the oldest topics in computer science.

History

Turing tarpits were the original object of study for computer scientists. The two oldest tarpits are a fragment of a computational calculus documented by Alonzo Church in 1932 and published in 1936, and programs over a theoretical machine published by his student Alan Turing in 1936; today they are known as the lambda calculus and Turing machines respectively. Church and Turing proved several universality and completeness results, and also showed that some problems, most famously the halting problem, are not computable. It was also known that the SKI basis of combinatory logic, identified by Moses Schönfinkel in 1920 and published in 1924, could encode the lambda calculus and thus inherited a notion of Turing-completeness.

Over the next decade, as computability theory blossomed, several more tarpits were identified, notably the Post machines published by Emil Post in 1946.

The 1950s presented a shift as computing became commercially available with the delivery of the wikipedia:UNIVAC I to the United States Census Bureau. One member of the UNIVAC project, Grace Hopper, pushed for legible programming environments for commercial programmers. This push led to the development of parser and compiler theory and an overall shift away from computability theory as the core of computer-science research. From 1955 to 1959, Hopper led a team to create FLOW-MATIC, which would go on to directly influence COBOL. Starting with Itiroo Sakai's work in 1961, variants of the wikipedia:CYK algorithm were developed for parsing context-free languages. In 1966, Corrado Böhm and Giuseppe Jacopini proved the wikipedia:structured program theorem, establishing flowchart languages as a readable basis for discussing algorithms. Overall, from 1949 to 1959, computing transitioned from writing programs to writing computer-readable explanatory documents, and throughout the 1960s the theoretical foundations of parsing and compiling such documents were the main focus of research.

Nonetheless, there was still substantial academic interest in Turing tarpits through the 1950s and early 1960s, particularly among computability theorists. In 1961, Hao Wang posed several conjectures about Wang tiles, and in 1966 Robert Berger responded by showing that Wang tiles can simulate Turing machines, so that whether a given set of 2D Wang tiles tessellates the plane is undecidable. In 1964, Marvin Minsky showed that Post tag systems admit universal behavior, confirming a 1943 conjecture by Post. Also in 1964, Böhm published P′′, a spiritual predecessor to Brainfuck.

Near the middle of the 1960s, however, the bubble of optimism surrounding such minimal machines burst. In his 1987 paper The Busy Beaver Game and the Meaning of Life, Allen Brady relates a private conversation with John McCarthy (inventor of LISP) circa 1964 in which McCarthy states:

We thought if we were to find the smallest universal machine then we could learn a great deal about computability -- of course that wouldn't be so!

In the final chapter, Very Simple Bases for Computability, of his 1967 book Computation: Finite and Infinite Machines, Marvin Minsky writes:

The reader is welcome to enter the competition [to design the smallest universal Turing machine ...] although the reader should understand clearly that the question is an intensely tricky puzzle and has essentially no serious mathematical interest.

By the 1980s, Turing machines were considered a topic of study distinct from mainstream computer science. In the same 1982 article in which he coined the phrase "Turing tar-pit", Alan Perlis opined:

What is the difference between a Turing machine and the modern computer? It's the same as that between Hillary's ascent of Everest and the establishment of a Hilton hotel on its peak.

Although enthusiasm for them has declined, Turing tarpits remain valuable as an academic tool. They are pedagogically useful for illustrating various aspects of computability theory and the principles of programming languages; for example, Böhm's insights about structured programming were a direct result of his work on Turing machines (Barendregt 2018). They also have applications in wikipedia:algorithmic information theory, where simple machines are easier to analyze.

The pursuit of Turing tarpits in the esoteric programming language community, however, seems to be essentially for its own sake. Whether this pursuit is regarded by its practitioners as a pastime, an art form, informal research, or something else entirely, it seems safe to say that they must enjoy the "intensely tricky puzzle" to which Minsky referred.

Linguistic elements

To find an arbitrarily simple machine, one must first define "simplicity", i.e. some metric to be minimized. This metric must be based on a measurable property of the language, and it usually comes down to counting certain linguistic elements in descriptions of machines. There are several linguistic elements to choose from, and the borders between them are not always crisp, which makes the definition a challenge.

For universal Turing machines, the product of the number of states and the number of symbols is often used. Claude Shannon showed that either the number of states or the number of symbols can be reduced to as few as 2, but only at the cost of increasing the other. Minsky and Bobrow showed, by exhaustive search, that no Turing machine with only 2 states and 2 symbols is universal.
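
As a concrete illustration (not taken from any machine in the literature), the Python sketch below builds a hypothetical 2-state, 3-symbol transition table and computes its state-symbol product:

  # Hypothetical transition table: (state, read symbol) -> (write, move, next state)
  delta = {
      ("A", 0): (1, +1, "B"),
      ("A", 1): (2, -1, "A"),
      ("A", 2): (1, -1, "A"),
      ("B", 0): (2, -1, "A"),
      ("B", 1): (2, +1, "B"),
      ("B", 2): (0, +1, "A"),
  }

  states = {state for state, _ in delta}
  symbols = {symbol for _, symbol in delta}

  # The size usually reported for a Turing machine: states x symbols.
  print(len(states), "states x", len(symbols), "symbols =", len(states) * len(symbols))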

States are not always easy or natural to measure in a programming language, though, and for imperative languages, instruction counts can be used instead. Functional and rewriting languages might instead count their primitive combinators or reduction rules. We can use the (slightly vague) term operation to refer to any of these, but it should be understood that these are not generally comparable measures.
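
For instance, in Brainfuck the instruction count is just the number of distinct commands; a minimal Python sketch (assuming the standard eight-command Brainfuck instruction set) of measuring which instructions a given program actually uses:

  BRAINFUCK_INSTRUCTIONS = set("><+-.,[]")   # the 8 Brainfuck commands

  def instructions_used(program):
      # Brainfuck treats every other character as a comment, so intersect.
      return set(program) & BRAINFUCK_INSTRUCTIONS

  # Illustrative fragment: it uses 7 of the 8 instructions (',' is absent).
  print(sorted(instructions_used("++[>+++<-]>.")))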

Survey

Here are rough metrics for some (mostly esoteric) programming languages that are Turing tarpits:

  • +-): 3 instructions, 3 symbols
  • ///: 3 types of actual instructions, infinitely many symbols (only 2 symbols are needed for Turing-completeness)
  • 123: 3 operations, 3 symbols
  • Autopsy: 2 instructions, 2 symbols
  • Binary combinatory logic: 2 operations, 2 symbols
  • BinaryLanguage: 14 instructions, 14 symbols
  • BitChanger: 4 instructions, 4 symbols
  • Bitwise Cyclic Tag: 2 instructions, 2 symbols (a small interpreter is sketched after this list)
  • Brainfuck: 8 instructions, 8 symbols
  • iag: 3 instructions, >3 operations, 3 symbols
  • I/D machine: 2 operations, 2 symbols
  • I like frog: 12 instructions, 10 symbols (technically 4 if you count "i ", "like ", "frog " and a newline as their own symbols) (see "i like ternary" on the page for a language with 4 symbols)
  • Iota: 2 operations, 2 symbols
  • Jot: 3 operations, 2 symbols
  • LAST: 4 instructions, 4 symbols or 2 symbols for LAST-B (binary representation)
  • OISC: 1 instruction, 3 symbols (signed unary, including separator)
  • P'': 4 instructions, 4 symbols
  • Quinary Bueue: 4 instructions, 4 symbols
  • SHITS: 5 instructions (3 without I/O), 5 (3 without I/O) symbols
  • Tag: 1 instruction, infinitely many symbols
  • Tarski: 6 operations, 6 symbols
  • Thue: 1 operation, 4 or 5 symbols (minimum)
  • Ultimate bf instruction minimalization!: 2 operations, 2 symbols
  • Unary: 1 instruction, 1 symbol
  • Wierd: 6 operations, 2 symbols
  • X strike: 4 instructions, 3 symbols
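
As a sketch of how small these languages can be, here is a minimal Python interpreter for Bitwise Cyclic Tag, following the usual description of that language: the program and the data are bit strings, the program is read cyclically, command 0 deletes the leftmost data bit, command 1x appends the bit x to the data if the leftmost data bit is 1, and execution halts when the data string is empty. The example program and data below are arbitrary.

  def run_bct(program, data, max_steps=1000):
      # program and data are strings of '0'/'1'; the program is read cyclically.
      data = list(data)
      pc = 0
      for _ in range(max_steps):
          if not data:
              return ""                       # halted: the data string is empty
          cmd = program[pc % len(program)]
          pc += 1
          if cmd == "0":
              data.pop(0)                     # 0: delete the leftmost data bit
          else:
              x = program[pc % len(program)]  # 1x: read the operand bit x
              pc += 1
              if data[0] == "1":
                  data.append(x)              # append x only if the head bit is 1
      return "".join(data)                    # step limit reached; return as-is

  # Arbitrary example: this particular run happens to halt with empty data.
  print(repr(run_bct("100", "1101")))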

References

  • "Epigrams in Programming", Perlis 1982, in SIGPLAN (September 1982)
  • Brady, A. H. 1987. The Busy Beaver Game and the Meaning of Life. The Universal Turing Machine: A Half-Century Survey. pp 237-254. R. Herken, Ed. Springer-Verlag, NY. (ISBN 3-211-82637-8; ISBN 3-211-82628-9)
  • "Gems of Corrado Böhm", Berendregt 2018