Robin Milner, in full Arthur John Robin Gorell Milner, (born Jan. 13, 1934, Yealmpton, Devon, Eng.—died March 20, 2010, Cambridge, Cambridgeshire), English computer scientist and winner of the 1991 A.M. Turing Award, the highest honour in computer science, for his work with automatic theorem provers, the ML computer programming language, and a general theory of concurrency.
Milner attended Eton College and won a scholarship to attend the University of Cambridge in 1952, but he had to postpone his course work while he served at the Suez Canal with the Royal Engineers of the British army for the next two years. Milner entered Cambridge in 1954 and graduated with a bachelor’s degree in mathematics in 1957. He was first exposed to computing in the summer of 1956 with a short course in programming in which he used the school’s EDSAC computer. After that, Milner moved to London, where he held various jobs, including a post teaching mathematics at Marylebone Grammar School (1959–60), before he became a computer programmer and developed compilers at Ferranti Ltd. (Ferranti produced the first commercial computer, the Ferranti Mark I, in 1951.)
Milner’s first encounter with computing had been uninspiring. The 10-day course in programming on Maurice Wilkes’ EDSAC (Electronic Delay Storage Automatic Calculator) provoked a characteristically decisive reaction: “Programming was not a very beautiful thing. I resolved I would never go near a computer in my life.” He found the activity of “writing one instruction after the other” to be “inelegant” and “arbitrary”.
Milner never studied for a PhD. His next engagement with computers came at Ferranti, where he “made sure that all the new machines they were selling were actually working,” and also wrote parts of a compiler. In 1963 he moved to academia, as a lecturer at City University London, with a passionate belief that a deeper understanding of programming could form the basis for a more elegant practice.
Milner went to Oxford to hear lectures by Christopher Strachey and Dana Scott, who had been thinking about computable functionals, an abstract mathematical model for the meaning (“denotational semantics”) of a program, and about a logic of computable functions (LCF) for reasoning about such denotations. Milner said that getting interested in program verification and semantics was the real start of his research career. At Swansea, his next academic post, he wrote his first automatic theorem prover but was still “shattered with the difficulty of doing anything interesting”.
He was then invited to Stanford, where he spent two years (1971–73) as a research associate with John McCarthy’s Artificial Intelligence Project. Milner was more interested in how human intelligence might be “amplified” than in what an artificial intelligence might achieve independently: machines, he thought, might help humans apply the Scott-Strachey approach to denotational semantics to practical examples. The calculations required were laborious and error-prone, but many were mechanical and amenable to computerization.
Milner was still dissatisfied with the difficulty of using the then-existing programming languages – he was programming in LISP – and with the possibility of error. How could one be sure that the system could not introduce an inconsistency, and hence become useless, since an inconsistent system can prove anything? When he moved to Edinburgh in 1973, he embarked on an ambitious programme to develop and implement a practical programming language. It drew on LISP but was also much influenced by Peter Landin’s ISWIM (If You See What I Mean), an ‘abstract’ language that was never really implemented but which had clear mathematical foundations and an operational semantics based on Landin’s SECD abstract machine.
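Landin’s SECD machine evaluates lambda-calculus expressions using a state with four components: a Stack of computed values, an Environment binding variables, a Control list of work remaining, and a Dump of suspended calls. The sketch below, in OCaml (itself a descendant of ML), is a minimal illustrative reconstruction; the concrete datatypes and the small instruction set are assumptions of this example, not Landin’s original formulation.

```ocaml
type term =
  | Var of string
  | Lam of string * term
  | App of term * term

(* A value is a closure: a function body paired with its environment. *)
type value = Closure of string * term * env
and env = (string * value) list

(* Control items: a term still to evaluate, or a pending application. *)
type control = Term of term | Apply

(* The dump saves suspended (stack, environment, control) frames. *)
type dump = (value list * env * control list) list

let rec run (s : value list) (e : env) (c : control list) (d : dump) : value =
  match s, c, d with
  | [v], [], [] -> v                                        (* done: result on stack *)
  | [v], [], (s', e', c') :: d' -> run (v :: s') e' c' d'   (* return from a call *)
  | _, Term (Var x) :: c', _ -> run (List.assoc x e :: s) e c' d
  | _, Term (Lam (x, b)) :: c', _ -> run (Closure (x, b, e) :: s) e c' d
  | _, Term (App (f, a)) :: c', _ ->
      (* push the argument, then the function, then an Apply marker *)
      run s e (Term a :: Term f :: Apply :: c') d
  | Closure (x, b, e') :: v :: s', Apply :: c', _ ->
      (* enter the closure's body; save the caller's state on the dump *)
      run [] ((x, v) :: e') [Term b] ((s', e, c') :: d)
  | _ -> failwith "stuck"

(* (\f. \x. f x) id id evaluates to the identity closure. *)
let id = Lam ("y", Var "y")
let apply_fn = Lam ("f", Lam ("x", App (Var "f", Var "x")))

let () =
  match run [] [] [Term (App (App (apply_fn, id), id))] [] with
  | Closure (x, _, _) -> Printf.printf "evaluated to a closure over %s\n" x
```

Each step of `run` is a pure transition on the four components, which is what made this style of semantics mathematically tractable in a way that ad hoc interpreters were not.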
Milner’s new language was designed as a metalanguage (hence its name, ML) for implementing a new proof assistant called Edinburgh LCF. It was a robust, efficient, practical tool well adapted to this task, and it was defined with the mathematical clarity exemplified by Landin’s work.
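The LCF design answers the soundness worry raised above through ML’s type system: a theorem is a value of an abstract type whose only constructors are the inference rules of the logic, so no type-correct program can fabricate a theorem the rules do not license. Below is a minimal sketch of the idea in OCaml, using a toy propositional logic; the `Thm` module and its rule names are illustrative assumptions, not Edinburgh LCF’s actual interface or logic.

```ocaml
(* Formulas are ordinary data that any code may construct. *)
type formula =
  | Atom of string
  | Imp of formula * formula

(* Theorems live behind an abstract type: the signature exposes no
   constructor for Thm.thm, so the inference rules below are the ONLY
   way to obtain one. (Toy kernel; names are illustrative.) *)
module Thm : sig
  type thm
  val assume : formula -> thm               (* A |- A *)
  val intro : formula -> thm -> thm         (* Hs |- B  gives  Hs - {A} |- A -> B *)
  val elim : thm -> thm -> thm              (* modus ponens *)
  val dest : thm -> formula list * formula  (* inspect a theorem, not build one *)
end = struct
  (* Internally a theorem is a sequent: hypotheses |- conclusion. *)
  type thm = formula list * formula
  let assume a = ([ a ], a)
  let intro a (hs, b) = (List.filter (fun h -> h <> a) hs, Imp (a, b))
  let elim (hs1, ab) (hs2, a) =
    match ab with
    | Imp (a', b) when a' = a -> (hs1 @ hs2, b)
    | _ -> failwith "elim: not an implication of the right shape"
  let dest th = th
end

(* Deriving |- p -> p: the only route is through the kernel's rules. *)
let () =
  let p = Atom "p" in
  let th = Thm.intro p (Thm.assume p) in
  match Thm.dest th with
  | [], Imp (Atom "p", Atom "p") -> print_endline "proved: p -> p"
  | _ -> print_endline "unexpected theorem"
```

The payoff is that only this small kernel must be trusted: proof-search code of arbitrary complexity can be layered on top, and a bug there can make a proof attempt fail, but it cannot produce an unsound theorem. ML’s exception mechanism grew out of the same need, letting a tactic that does not apply fail cleanly rather than return a wrong result.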
ML was way ahead of its time. It was built on clean and well-articulated mathematical ideas, teased apart so that they could be studied independently and relatively easily remixed and reused. ML has influenced many practical languages, including Java, Scala, and Microsoft’s F#. Indeed, no serious language designer should ignore this example of good design.
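Two of those ideas can be seen in a few lines: algebraic datatypes with pattern matching, and parametric polymorphism inferred automatically by the type discipline Milner published in 1978 (now known as Hindley-Milner inference). The example below is an ordinary textbook illustration in OCaml, not code from Milner’s own work.

```ocaml
(* An algebraic datatype: binary trees over an arbitrary element type 'a. *)
type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

(* No type annotations are needed: the compiler infers the polymorphic
   type ('a -> 'b) -> 'a tree -> 'b tree for this function. *)
let rec map f = function
  | Leaf -> Leaf
  | Node (l, x, r) -> Node (map f l, f x, map f r)

(* One definition, checked once, works at every element type. *)
let ints = Node (Leaf, 1, Node (Leaf, 2, Leaf))   (* int tree *)
let strs = map string_of_int ints                 (* string tree *)

let () =
  match strs with
  | Node (_, s, _) -> print_endline s             (* prints "1" *)
  | Leaf -> ()
```

The design point is that `map` is written and type-checked once yet usable at every type, with no casts; that combination of safety and reuse is what it means for ML’s ideas to be “remixed and reused”, and it is recognizably the ancestor of the generics and pattern matching found in Java, Scala, and F# today.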