Let's #TalkConcurrency with Carl Hewitt

The last in our trio of concurrency one-on-one interviews is Carl Hewitt, designer of the logic programming language Planner. Also known for the Actor model of computation, Carl does a fantastic job of describing how he and others came up with Planner, and where he feels concurrency will go from here.

About Carl Hewitt

Carl has been highly influential in the development of logic, functional, and concurrent programming, his most recognisable work being the logic programming language Planner and the Actor model of computation.

During his educational years, Carl was at MIT (Massachusetts Institute of Technology), famed for its work in science, computing, and engineering. He gained his PhD in applied mathematics there in 1971, then worked in the Department of Electrical Engineering and Computer Science until 2000, when he became emeritus.

Below is an edited transcript of Carl’s #TalkConcurrency video…

[Transcript]

Carl: My name is Carl Hewitt. I originally got started at the MIT Artificial Intelligence Laboratory with professors Marvin Minsky, Seymour Papert, and Mike Paterson, graduate students Terry Winograd, Gerry Sussman, and Eugene Charniak, and all the hackers. We were very excited about the field, and I needed a thesis topic. I went to talk to Marvin about my bad ideas and he would say, “Well, I’ve always been interested in plans, assertions, and goals”. Then I went to my office to work some more. When I went back to talk more about my bad ideas, he said, “I’m kind of interested in plans, and assertions, and goals”. Then it occurred to me that, by golly, I could make a programming language based on plans, assertions, and goals which would be much more high-level than LISP. So I designed Planner, which was used by Terry Winograd in his famous blocks world SHRDLU demo. We thought that we had really made some progress until we discovered all the things that Planner couldn’t do, because the PDP-10 was such a tiny sequential machine.

A number of ideas were floating around for new computational models. For example, there were Petri nets and capability systems. I knew that Planner was inadequate and was working on extensions to add concurrency. In November 1972, Alan Kay gave a seminar at MIT about his new Smalltalk-71 programming language, which made use of message passing. Trying to somehow unify all these disparate models of computation, we had a eureka moment: “We can unify all of these things into one concept, namely, that of an Actor. Actors can do all of digital computation. There’s just one fundamental abstraction, one fundamental primitive idea. If Actors can be done right, it’s going to be enormously important, because it’ll enable large numbers of machines to communicate with each other as well as providing enormous concurrency on a single machine.”

It has taken an enormous amount of time to figure out precisely what an Actor is and how to make Actors performant. Now that we can create the hardware to make these gazillions of Actors perform, we can create Scalable Intelligent Systems for the first time in history. Creating such systems was the dream that attracted us to the MIT AI Lab. It was the dream of Marvin Minsky, Allen Newell, Herbert Simon, and John McCarthy. Unfortunately, they didn’t live to see it.

The funny thing is, they thought the task was to make an artificial human. Turing proposed this in his Turing test. We won’t be able to do that for quite a while, but we can make Scalable Intelligent Systems for doing things like pain management, and a whole bunch of other things, that are scalable in the sense that they’re not bounded in any important dimension. There are no hard barriers to improvement.

For the first time in history, using massive concurrency and things like Actors, we can create Scalable Intelligent Systems for important human endeavours. It’s been a long journey of developing these kinds of systems. Developing technology for massive, inconsistency-robust ontologies was necessary because Intelligent Systems will have vast amounts of inconsistent information. A tiny example from history is that a photon is both a wave and not a wave. It’s both a particle and not a particle. That’s an inherent inconsistency that the physicists have learned to live with through quantum mechanics.

Scalable Intelligent System ontologies are going to be chock full of inconsistencies. Analogous to the physicists with their quantum mechanics, we have invented technology for dealing with pervasive inconsistencies. The only way to make Scalable Intelligent Systems performant is through massive concurrency using many cores on a chip. Modularity comes from each Actor keeping its own local state and processing messages concurrently along with the zillions of other Actors on the same chip.
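
Erlang processes are one practical embodiment of this idea, so, for Planet Erlang readers, here is a minimal sketch of an Actor in Erlang (the module and names are ours, purely for illustration): a process that keeps private local state and changes it only in response to messages.

    -module(counter).
    -export([start/0, loop/1]).

    %% Spawn an Actor whose entire state is one private integer.
    start() ->
        spawn(counter, loop, [0]).

    %% Receive a message, act on it, and carry the (possibly
    %% updated) state into the next receive.
    loop(Count) ->
        receive
            increment ->
                loop(Count + 1);
            {get, From} ->
                From ! {count, Count},
                loop(Count)
        end.

No other process can read or write Count directly; the only way in is a message, which is exactly the modularity described above.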

Tens of thousands of Actor cores on one chip could be possible using carbon nanotubes and/or nanoscale vacuum-channel transistors. China’s Minister of Science has projected that there’s going to be a revolution in Intelligent Systems by 2025. The challenge is: who’s going to do it? I think that China is fully capable of doing it: they have the commitment from the top, engineering talent, and a technology supply base. I don’t see any reason they can’t do it. Who else is going to do it? We wonder. Thank you.

[00:05:11] [END OF AUDIO]

If you want to learn more about the EE380 colloquium “Scalable Intelligent Systems: Build and Deploy by 2025”, check out more of Carl’s work on the Stanford University website.

Permalink

The Erlang Rationale

It's been a very long time since the last post.

Some questions have come up about the Erlang Rationale. I wrote it over 10 years ago, and while it has appeared in some presentations, it has never been published in any way. Until now, that is.

It is an attempt to explain why things look the way they do in Erlang, and our thinking behind many of the properties and features of the language. There are also descriptions of parts of the system which today seem to lack documentation, for example the I/O system and groups. The Rationale mainly deals with the core parts of the language and the older parts of the libraries, not OTP.

It can be found here: The Erlang Rationale.

The Rationale is not finished, and I am adding to it and expanding/improving what is already there. When there has been a significant enough change, I will release a new version. However, someone suggested I could release new bits as they come and get comments and suggestions. This seems like a good idea, and I will try it.

EDIT: I saw there was an old blog post as well. I have updated the link in it so it works.

Permalink

Let's #TalkConcurrency with Joe Armstrong

We launched our #TalkConcurrency campaign last week with a fantastic interview with one of the founding fathers of concurrency, Sir Tony Hoare. This week we continue with the co-creator of Erlang, Joe Armstrong, as he walks us through his experiences with concurrency since 1985.

In the mid '80s, whilst working at the Ericsson Computer Science Laboratory, Joe and his colleagues were looking for an approach to developing fault-tolerant and scalable systems. This resulted in Erlang-style concurrency as we know it today. During our visit to Cambridge University's Department of Computer Science, we asked Joe, Tony, and Carl to discuss their experiences of concurrent systems and the future direction of concurrent software development.

The panel discussion will follow, but for now, here is an insightful discussion with Joe Armstrong about how concurrency models have developed, and where they are heading in the future.

About Joe Armstrong

Joe made his name by co-creating Erlang alongside Robert Virding and Mike Williams in the 1980s at the Ericsson Computer Science Labs. Before that, he was debugging programs in exchange for beer whilst studying at University College London. He later received a PhD in computer science from the Royal Institute of Technology (KTH) in Stockholm, Sweden, in 2003.

Joe is the author of a number of key books on Erlang and beyond, including Concurrent Programming in Erlang and Programming Erlang: Software for a Concurrent World, and he is featured in Coders at Work.

You can read and watch more about Joe’s Erlang journey via our #OpenErlang campaign.

[Transcript]

Joe Armstrong: My name is Joe Armstrong. I’m here to tell you about a style of concurrent programming that we evolved from about 1985.

My introduction to this was when I started working at Ericsson and I got the problem of trying to figure out how to build a fault-tolerant system. At the time, Ericsson built large telephone exchanges that had hundreds of thousands of users, and a key requirement in building these systems was that they should never go down. In other words, they had to be completely fault-tolerant.

There was actually a fairly long history of this. Ericsson started building systems like this in about 1974, in the mid ‘70s. By the time I came along, which was around about 1985, they were a world-leading manufacturer of large fault-tolerant systems. I got a job in the computer science lab to see how we could program these systems in the future.

Initially, I wasn’t really interested in concurrency as such; I was interested in how you make fault-tolerant systems. A characteristic of these systems was that they handled hundreds of thousands of telephone calls at the same time. If you imagine a system that has 100,000 subscribers talking to each other, you could view this as being 50,000 pairs of people talking to each other.

Obviously, there is concurrency in the way the problem is set up. If we have 100,000 people using a telephone exchange, we have 100,000 parallel activities going on. The natural way to model this is with 100,000 processes grouped into pairs: 100,000 people talking is 50,000 pairs of two people. It seemed a natural way to describe this.
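
In Erlang, the language this thinking eventually produced, a pair of talking subscribers really can be modelled as a pair of processes. A hypothetical sketch (all names invented for illustration):

    -module(call).
    -export([connect/0, party/1]).

    %% Spawn one process per party and introduce them to each other.
    connect() ->
        A = spawn(call, party, [undefined]),
        B = spawn(call, party, [undefined]),
        A ! {peer, B},
        B ! {peer, A},
        {A, B}.

    %% Each party knows only its peer and talks to it by message.
    party(Peer) ->
        receive
            {peer, NewPeer} ->
                party(NewPeer);
            {say, Words} when Peer =/= undefined ->
                Peer ! {hear, Words},
                party(Peer);
            {hear, Words} ->
                io:format("~p heard: ~s~n", [self(), Words]),
                party(Peer)
        end.

A 100,000-subscriber exchange is then simply 50,000 such pairs; Erlang processes are cheap enough for that to be a realistic design.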

There was also a tradition of doing this using multiple processors. The system that I first looked at, the one that was being used in Ericsson, consisted of two processors: an active processor that was doing all the work, and a standby processor that immediately took over if the first processor failed. This was the starting point. That was back in 1985. We were in a computer science lab, and I started trying to describe this in a number of different programming languages. There was a multi-programming-language project to try and model this in different languages.

Sooner or later, I stumbled upon Smalltalk. The way that Smalltalk described things in terms of objects and messages seemed very good, but it didn't have the true concurrency I was interested in. The problem with Smalltalk was that if things failed, there wasn't really a good failure model, and it wasn't really concurrent. So I went on to try to model failure.

Actually, through some accidental circumstances, I was introduced to Prolog, and I thought I could model all of this in Prolog. Prolog had a very bad failure model: if a computation fails in Prolog, you just get an answer saying, “No.” Not very good. I slowly modified this. Over a period of three or four years, from about 1985 to 1989, we developed the style of programming which became the language called Erlang. During that time, the project grew from myself to include Robert Virding and Mike Williams. We evolved this style of programming that is now called Erlang.

In 1990, in what I think was a sort of hallelujah moment, we went to a conference in Bournemouth about how to program distributed systems. At the time, everybody was building tightly-coupled distributed systems. It was rather embarrassing because, at the end of each talk, we took turns sticking up our hands and asking the embarrassing question, “What happens if one of the nodes fails?”

These people would say, “Well, we just assume the nodes aren't going to fail.” The answer to the question was, “Well, the whole system doesn't work.” We would shrug our shoulders and say, “Well, then we know the system doesn't work.” We were in this rather strange position of thinking, “Hang on. The rest of the world is completely wrong and we are right. We're doing it the right way.”

I had always viewed failure as being central to the problem. We cannot assume, when we're building a big system, that the individual nodes will not fail. I viewed building systems as building them from a lot of independent components, where any one of those components could fail at any point in time, and we have to live with that.

Once you’ve split the world into parallel components, the only way they can talk to each other is through sending messages. This is almost a biological, a physical model of the world. When a group of people sit and talk to each other, you can view them as having independent models in their heads and they talk to each other through language. Language is messages and what’s in their brain is a state machine. This seemed a very natural way of thinking.

I’m also minded to think that von Neumann and Turing originally thought of computation as a biological process of communicating machines. It seemed a very natural way to model things. In fact, it seems rather strange recording a set of talks about the origin of concurrency, because the way the whole world works is concurrent. If we look at the internet, it’s billions of devices all connected together, all using message passing, and all with private memories. The Internet of Things and the Web consist of loads of nodes with private memory communicating through message passing.

To me, it is very strange that that model breaks down when you get to individual applications inside a computer. It seems very strange to not have concurrency. This is a funny state of affairs! In the world of programming, the vast majority of all programming languages make it very easy to write sequential programs and very difficult to write concurrent programs. My goal was to make it very easy to write concurrent programs. Consequently, it might be a bit more difficult to write sequential programs. Of course, when multi-cores came along, what we had done then mapped very well onto parallel programs.

Up to that point, concurrent programs were actually sequential programs that were interleaved rather quickly by an operating system. When multi-cores came along, the possibility emerged to execute those programs in parallel, so we were immediately able to take advantage of parallel cores. In fact, that's probably the reason why Erlang has spread in the last 15 to 20 years: it scales naturally onto massive multi-core computers.

What were the key points that we learned in the early days of Erlang? From about 1985 to 1989, we did a lot of experiments. What we tried to do was structure the system into primitives and libraries. We had to choose primitives that made it easy to write the libraries; a normal program will then use the libraries. For example, there are many very, very difficult problems, like leader election or maintaining write-append buffers between processes. They turn out to be very difficult to program, so they're done in the libraries; they're not done in the kernel of the language.

The work in the late '80s was to identify which primitives we had to have in the language in order to build the libraries. This was not at all obvious. One primitive that came in, actually an idea of Mike Williams's, was the notion of a link, which extended error handling across remote nodes. The idea was that you could link two processes together: if you've got one process here and another process there, you can put a link between them. The meaning of the link was that if one of the processes failed, the other process would receive a message saying that the remote process had failed.
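
That primitive survives unchanged in Erlang today. A minimal sketch of observing a linked process's failure (the worker and its failure reason are invented for illustration):

    -module(watch).
    -export([observe/0]).

    observe() ->
        %% trap_exit turns the exit signal delivered over the link
        %% into an ordinary message we can receive.
        process_flag(trap_exit, true),
        Worker = spawn_link(fun() -> exit(disk_full) end),
        receive
            {'EXIT', Worker, Reason} ->
                io:format("worker ~p failed: ~p~n", [Worker, Reason])
        end.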

This link mechanism was the enabling factor that allowed us to write a lot of libraries on top of it. What users of Erlang or Elixir will see is what are called supervision trees, where we link together collections of processes and build them into trees, but the underlying mechanism is the link mechanism. That's one of the things we learned: how to build these primitives. In fact, we tried lots of different primitives. We tried to implement buffers between processes and things like that. It turned out that these are very difficult to implement in a virtual machine, so we stripped the virtual machine down to the primitives that we needed.
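
In OTP, that layering is explicit: the supervisor behaviour is a library built on top of links and trap_exit. A skeletal example, where my_worker is a hypothetical worker module assumed to export start_link/0:

    -module(my_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    init([]) ->
        %% If the worker dies, restart it: up to 5 times a minute
        %% before the supervisor itself gives up.
        SupFlags = #{strategy => one_for_one, intensity => 5, period => 60},
        Child = #{id => worker,
                  start => {my_worker, start_link, []}},
        {ok, {SupFlags, [Child]}}.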

I think one of the things we learned is that in order to implement a concurrent language, you have to do three things at a very primitive level: message passing should be extremely fast, context switching should be extremely fast, and there should be a built-in error-handling mechanism. Without those, it's really impossible to build languages like this. That's what's built into the virtual machine and gets inherited by all languages that are implemented on top of this virtual machine.
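
A classic way to feel the first two properties is the process-ring benchmark: spawn a ring of processes and send a token around it. A rough sketch (module name and numbers are ours):

    -module(ring).
    -export([run/2, relay/1]).

    %% Build a ring of N processes (counting the caller) and pass a
    %% token around it M times. N = 100000 is perfectly reasonable.
    run(N, M) ->
        First = lists:foldl(fun(_, Next) -> spawn(ring, relay, [Next]) end,
                            self(),
                            lists:seq(1, N - 1)),
        First ! {token, M},
        wait(First).

    wait(First) ->
        receive
            {token, 0} ->
                done;
            {token, Laps} ->
                First ! {token, Laps - 1},
                wait(First)
        end.

    %% Each relay forwards the token; it terminates after the last lap.
    relay(Next) ->
        receive
            {token, 0} = Msg ->
                Next ! Msg;
            {token, _} = Msg ->
                Next ! Msg,
                relay(Next)
        end.

Timing it with timer:tc(ring, run, [100000, 10]) gives a feel for how cheap process creation and message passing are on the BEAM.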

[00:10:16] [END OF AUDIO]

Permalink

Erlang/OTP 21

June 2016 was the last time I talked about an Erlang release, and a lot has happened since then (it has now been two and a half years). The Elixir team has been helping Ericsson with the latest releases, so how about we take a look at the improvements they have integrated into OTP 21?

Permalink

Let's #TalkConcurrency with Sir Tony Hoare

Back in November, we took a trip to Cambridge University with Francesco Cesarini to arrange a panel discussion revolving around concurrency. For the panel, we enlisted the help of the best: Sir Tony Hoare, Joe Armstrong, and Carl Hewitt, all pioneers in the concurrency field!

What followed was a fantastic discussion about how concurrency models have developed, and where they will be heading in the future.

We also interviewed Sir Tony Hoare personally about his work with concurrency, which has spanned more than 50 years. Take a look for yourself!

About Sir Tony Hoare

Computer scientist Sir Tony Hoare FRS FREng is well known for developing the sorting algorithm Quicksort in 1959/1960, a systematic method for placing the elements of an array in order. It is one of the most commonly used sorting algorithms due to its speed, especially when compared with its closest competitors. Quicksort is a comparison sort, sometimes referred to as “partition-exchange sort”.
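
Since this is Planet Erlang, here is the idea of Quicksort sketched in Erlang: pick a pivot, partition the rest around it, and sort both sides recursively. (A readable list version for illustration, not the in-place array algorithm Hoare designed, and not tuned for speed.)

    -module(qs).
    -export([qsort/1]).

    %% The head is the pivot; smaller elements go left, the rest right.
    qsort([]) ->
        [];
    qsort([Pivot | Rest]) ->
        qsort([X || X <- Rest, X < Pivot]) ++
            [Pivot] ++
            qsort([X || X <- Rest, X >= Pivot]).

For example, qs:qsort([3, 1, 2]) returns [1, 2, 3].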

In addition to Quicksort, Sir Tony has a list of other achievements under his belt: Quickselect, Hoare logic, Communicating Sequential Processes, and structured programming, to name a few. He has also won numerous awards and honours for his brilliant work in the computer science field, including six honorary doctorates over the years and a knighthood in 2000. He is now a principal researcher at Microsoft Research in Cambridge and Emeritus Professor in the Department of Computer Science at Oxford University.

[Transcript]

Tony Hoare: Hi, I’m Tony Hoare. I’m an honorary member of the Cambridge University Computing Laboratory, where this interview is being recorded.

I’d like to tell you something about how I got interested in concurrency as a subject from our research, and what the influences were in the development of my thinking about this subject.

In 1960, I got my first job as a programmer at a small computer manufacturer, where I led a small team in the implementation of a new programming language of the time called ALGOL 60. The project was successfully delivered. After that, the company decided that it would need an operating system for its new computer, and I was put in charge of its design and development.

Unfortunately, the operating system, two years later, turned out to be undeliverable. It was just terribly slow. I put the cause of my failure down to the fact that I didn’t understand concurrency. This was one of the motives which led me, in 1968, to change my profession and join a university as professor of computation at the Queen’s University in Belfast.

One of the objectives of my research was the exploration of what concurrency really meant and how to tame it. In 1965, I had met Ole-Johan Dahl and Kristen Nygaard, the designers of a language called Simula 67, which had an enormous influence on the propagation of the ideas of object-oriented programming, and it impressed me greatly. They had a concept of timing which was implemented as simulated time, in the way that is completely standard nowadays. That gave a framework within which one could explore the meaning of real concurrency.

After I had joined Queen's University Belfast, I teamed up with a colleague, Jim Welsh, to design a new version of Pascal (a language which was then current for the implementation of compilers and other software) that would import some part of the class structure of Simula 67. It was called Pascal Plus. Jim Welsh wrote its compiler, tested it, and delivered it. It was rather a splendid exercise, because he only ever found one fault in the compiler that he'd written, a no-reference fault. Otherwise, he had implemented the language completely from its specification, without testing any part of the implementation before putting it all together and compiling it.

The only other Pascal compiler in existence at that time produced a running compiler, which we used. I used that compiler to implement an idea, based on the Simula class concept, called the monitor, and I published an article about it. The monitor is essentially a single object with many available methods which would accept method calls from many concurrent callers. It was protected automatically by an exclusion mechanism so that the calls could not run simultaneously.
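
An Erlang process gives the same guarantee for free: its mailbox serialises incoming calls, so the protected state can never be touched by two callers at once. A monitor-like sketch (module and names are ours, for illustration):

    -module(account).
    -export([start/0, deposit/2, balance/1, loop/1]).

    %% The mailbox plays the role of the monitor's exclusion
    %% mechanism: calls are messages, handled strictly one at a time.
    start() ->
        spawn(account, loop, [0]).

    deposit(Acc, Amount) ->
        Acc ! {deposit, Amount},
        ok.

    balance(Acc) ->
        Acc ! {balance, self()},
        receive
            {balance, Acc, B} -> B
        end.

    loop(Balance) ->
        receive
            {deposit, Amount} ->
                loop(Balance + Amount);
            {balance, From} ->
                From ! {balance, self(), Balance},
                loop(Balance)
        end.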

Using that language, I wrote, with the help of a colleague Mike McKeag, a thing which we called a structured operating system, and gave a course on the structured operating system at a summer school in Marktoberdorf in Bavaria.

The director of the summer school was one Edsger Dijkstra, well known worldwide, and a very good friend of mine. He told me, after I'd given the course of lectures, that he thought maybe I had a good idea here, which, from Edsger Dijkstra, was high praise. But he said the notation was terrible; we needed to design a different notation. When we had a free afternoon, we went and sat in our hotel garden and talked about the design of what eventually turned out to be CSP, where there were many, many objects, processes, running concurrently and communicating with each other by message passing rather than just by method calls. That was crucial.
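
Erlang's message passing is asynchronous, whereas a CSP communication is a rendezvous: sender and receiver must meet. The synchronous behaviour is easy to emulate on top of mailboxes; a hypothetical sketch in which the sender blocks until the receiver has taken the value:

    -module(csp).
    -export([send/2, recv/1]).

    %% The sender blocks until the receiver acknowledges,
    %% approximating a CSP rendezvous.
    send(Peer, Value) ->
        Peer ! {csp, self(), Value},
        receive
            {csp_ack, Peer} -> ok
        end.

    %% The receiver takes a value from a specific peer and acks it.
    recv(Peer) ->
        receive
            {csp, Peer, Value} ->
                Peer ! {csp_ack, self()},
                Value
        end.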

After this discussion with Edsger, I worked continuously; I made this my most interesting research project. I worked on it for three years before the publication of an article describing my ideas under the title Communicating Sequential Processes, published in the Communications of the ACM. It had taken me three years to write that article, one of them full-time research. Perhaps it was a just reward that it became a very widely referenced article, and it still is, gathering, I believe, about 2,000 references a year. I still felt I needed an exact definition of the semantics of this language, sufficiently complete and precise that I could use it for specifying and proving the correctness of programs written in the language.

In 1977, I moved to Oxford University, where I met Dana Scott and learnt something about denotational semantics as a way of describing the meaning of programming languages. I slightly adapted his semantics into a trace semantics and worked out a remarkably simple semantics for CSP of what I might call a conditional-correctness variety. It was a very good approximation, but it didn't allow the proof of termination, and it didn't properly model deadlock as a phenomenon which could not be recovered from. Fortunately, I had two very good students at Oxford, mathematics graduates Bill Roscoe and Steve Brookes. I got them to help me with working out the theory of this language, which they did very successfully, and we published an article in the– Sorry, I've forgotten where. Both Roscoe and Brookes have worked, as I have, for a long period on pursuing their research into the theory of CSP. Bill Roscoe, in particular, has published two substantial monographs on the subject and has built the theory into an automatic tool which helps in getting correct programs written in the CSP language. He extended the language very usefully, implemented it many times, and improved it enormously over a period of about 35 years of development. This is now, I think, one of the fastest proof-checkers for concurrent programs that exists in the world today. And the world today is, I think, where I will leave this story.

Thank you very much.

[00:10:13] [END OF AUDIO]

If you like this, you may like Francesco's Retrospective, detailing the last 20 years of open source Erlang! Take a journey through the Ericsson Labs up to the present day.

Permalink

Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.