Erlang OTP 21.2 is released


OTP 21.2

Erlang/OTP 21.2 is the second service release for the 21st major release, with improvements as well as a few new features!

SSH, SSL:

  • Public key methods: ssh-ed25519 and ssh-ed448 added. Requires OpenSSL 1.1.1 or later as cryptolib under the OTP application crypto.


  • ssl now uses {active, N} internally to boost performance. The old {active, once} behaviour can be restored by setting an application variable.

ERTS, Kernel:

  • New counters and atomics modules supply access to highly efficient operations on mutable, fixed word-sized variables.
  • New module persistent_term! Lookups are in constant time! No copying of the terms!
  • New pollset made to handle sockets that use {active, true} or {active, N}. Polled by a normal scheduler!

  • No more ONESHOT mechanism overhead on fds! Only on Linux and BSD
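The new modules are not demonstrated in the release notes themselves; as a rough sketch (an illustrative assumption, not official example code), they might be used like this in an Erlang shell on OTP 21.2 or later:

```erlang
%% counters: a fixed-size array of mutable, word-sized integers
C = counters:new(1, [atomics]),          % one slot, backed by atomics
ok = counters:add(C, 1, 5),              % bump slot 1 by 5
5 = counters:get(C, 1),

%% persistent_term: global terms with constant-time, copy-free lookup
ok = persistent_term:put(app_config, #{port => 8080}),
#{port := 8080} = persistent_term:get(app_config).
```

Note that persistent_term is intended for terms written rarely and read often; updating a term forces a global scan, which is why lookups can avoid copying.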

For a full list of details see:

Pre built versions for Windows can be fetched here:

Online documentation can be browsed here:

The Erlang/OTP source can also be found on GitHub in the official Erlang repository, here: OTP-21.2

Please report any new issues via Erlang/OTP's public issue tracker.

We want to thank all of those who sent us patches, suggestions and bug reports.
Thank you!
The Erlang/OTP Team at Ericsson


Twenty Years of Open Source Erlang

Erlang was released as Open Source Tuesday December 8th, 1998. Do you remember where you were that week? I was in Dallas (Texas); one of my many visits helping the Ericsson US branch set up an Erlang team working with the AXD301 switch. Waking up on Tuesday morning, I got the news.

The release was uneventful. There was no PR around the release, no build-up or media coverage. Just a sparse website (handcrafted using vi). An email was sent to the Erlang mailing list, a post made the front page of Slashdot, alongside a mention on comp.lang.functional (which Joe dutifully followed up on). There were no other marketing activities promoting the fact that Ericsson had released a huge open source project. My highlight that week was not the release of Erlang, but going to a Marky Ramone and the Intruders gig in a dive in downtown Dallas. Little did I know how Open Source Erlang would affect the tech industry, my career, and that of many others around me.

Getting it out of Ericsson

What made it all happen? Many of us wanted Erlang to be released as open source, for a variety of reasons. Some of my Ericsson colleagues wanted to leave their current positions, but continue building products with what they believed was a silver bullet. Others wanted to make the world a better place by making superior tools for fault tolerant and scalable systems available to the masses. For Ericsson management, a wider adoption of Erlang would mean a larger pool of talent to recruit from.

Jane Walerud was amongst us trying to sell Erlang outside of Ericsson and one of the few who at the time knew how to speak to management; she understood that the time of selling programming languages was over. Håkan Millroth, head of the Ericsson Software Architecture Lab suggested trying this new thing called “Open Source”. Jane, armed with an early version of The Cathedral and the Bazaar paper, convinced Ericsson management to release the source code for the Erlang VM, the standard libraries and parts of OTP.

Until Erlang was out, many did not believe it would happen. There was a fear that, at the last minute, Ericsson was going to pull the plug on the whole idea. Open Source, a term which had been coined a few months earlier, was a strange, scary new beast large corporations did not know how to handle. The concerns Ericsson had of sailing in uncharted territory, rightfully so, were many. To mitigate the risk of Erlang not being released, urban legend has it that our friend Richard O’Keefe, at the time working for the University of Otago in New Zealand, came to the rescue. Midnight comes earlier in the East, so as soon as the clocks struck midnight in New Zealand, the website went online for a few minutes. Just long enough for an anonymous user to download the very first Erlang release, ensuring its escape. When the download was confirmed, the website went offline again, only to reappear twelve hours later, at midnight Swedish time. I was in Dallas, fast asleep, so I can neither confirm nor deny if any of this actually happened. But as with every legend, I am sure there is a little bit of truth behind it.

The Dot Com Bubble Era

Adoption in the first few years was sluggish. Despite that, the OTP team, led by Kenneth Lundin, was hard at work. In May 1999, Björn Gustavsson's refactoring of the BEAM VM (Bogdan's Erlang Abstract Machine) became the official replacement for the JAM (Joe's Abstract Machine). Joe had left Ericsson a year earlier, and the BEAM, whilst faster, needed that time to become production ready.

I recall the excitement every time we found a new company using Erlang/OTP. Telia, the Swedish phone company, was working on a call center solution. And One2One - the UK mobile operator - had initially been using it for value added services, expanding its use to the core network. IdealX in Paris did the first foray into messaging and XMPP. Vail System in Chicago and Motivity in Toronto were using it for auto dialler software. And Bluetail, of course, had many products helping internet service providers with scalability and resilience.

The use of Erlang within Ericsson's core products continued to expand. This coincided with my move to London in 1999, where I increasingly came across the need for Erlang expertise within Ericsson. Erlang Solutions was born. Within a year of founding the company, I had customers in Sweden, Norway, Australia, Ireland, France, the US, and of course, the UK. In 2000, we got our first non-Ericsson customer: training, mentorship and a code review for IdealX in Paris.

It was the Bluetail acquisition by Alteon Web Systems for $152 million (a few days later Alteon were acquired by Nortel), which sent the first ripples through the Erlang community. An Ericsson competitor developing Erlang products! And a generation of successful entrepreneurs who had the funds to get involved in many other startups; Synapse, Klarna and Tail-f being some of them.

Soon after the Bluetail success came the dot com crash; the industry went into survival mode, and later into recovery mode. The crash, however, did not affect academics, who were moving full steam ahead. In 2002, Prof. John Hughes of Chalmers University managed to get the Erlang Workshop accredited by SIGPLAN and the ACM. We didn't really know what this all meant, but we were nonetheless very proud of it. The ACM SIGPLAN Erlang Workshop in Pittsburgh (Pennsylvania) was the first accredited workshop. Here, a PhD student from Uppsala University named Richard Carlsson presents the Erlang version of try-catch to the world.

In September 2004, Kostis Sagonas, from Uppsala University, hijacks the lightning talks at the ACM SIGPLAN Erlang workshop in Snowbird (Utah) and gives the first public demo of Dialyzer. He runs it on a code base from the South African Teba Bank. It was the first of many amazing tools he and his students contributed to the ecosystem.

Erlang had long been used to teach aspects of computer science in many universities all over the world. This in turn led to research, Master's thesis and PhD projects. The workshop provided a forum for academics to publish their results and validate them with industrial partners. Downloads from the site kept on increasing, as did adoption.

In 2003, Thomas Arts, program manager at the IT University of Gothenburg, invites me to teach an Erlang course to his undergraduate class. Prof. John Hughes, despite already knowing Erlang, wanted to learn about it from someone who had used it in production, so he joins the class. One morning, he shows up to class tired, having pulled an all-nighter. He had developed the first version of Erlang QuickCheck, which he was dutifully using to test the course exercises. That was the start of Quviq and a commercial version of QuickCheck, a property-based testing tool second to none. I ended up teaching at the IT University for a decade, with over 700 students attending the course.

Getting Into Messaging

During the dot com crash, Alexey Shchepin starts work on an XMPP-based instant messaging server called ejabberd. After working on it for three years, he releases version 1.0 on December 1st, 2005. Facebook Chat forks it, rolling out a chat service to 70M users. At around the same time, Brian Acton and Jan Koum founded WhatsApp, also based on a fork of ejabberd. As forking ejabberd was all the hype, MongooseIM did the same, becoming a generic platform for large scale messaging solutions.

In May 2006, RabbitMQ is born, as we find out that work was underway to define and implement a new pub/sub messaging standard called AMQP. RabbitMQ is today the backbone of tens of thousands of systems. By the end of the decade, Erlang had become the language of choice for many messaging solutions.

The Multi-Core Years

It was not only universities innovating during the dot com recovery. In May of 2005, a multi-core version of the BEAM VM is released by the OTP team, proving that the Erlang concurrency and programming models are ideal for future multi-core architectures. Most of the excitement was on the Erlang mailing list, as not many had realised that the free lunch was over. We took ejabberd and, just by compiling it with the latest version of Erlang, got a 280% increase in throughput when running it on a quad-core machine.

In May 2007, the original reel of the 1991 Erlang the Movie was anonymously leaked from a VHS cassette in an Ericsson safe and put on the site, eventually making its way to YouTube. Still, no one has publicly taken responsibility for this action. The world, however, finally understood the relief felt by those still under Ericsson NDA that none of the computer scientists featured in the film had given up their day jobs for a career in acting. The film got a sequel in 2013, when a hipster tries to give Erlang a cool look. This time, the culprit releasing it was identified as Chicago resident Garrett Smith.

In 2007, Programming Erlang by Joe Armstrong is published by The Pragmatic Programmers. The following year, in June 2008, I held the first paper copy of Erlang Programming, a book Simon Thompson and I had spent 18 months writing. At the time, an O'Reilly book was the seal of approval that emerging programming languages needed, giving way to a fantastic and diverse range of books in many languages.

The book launch party happened in conjunction with the first commercial Erlang conference, the Erlang eXchange, held in London in June 2008. It was not the first event, as Bjarne Däcker, the former head of the Ericsson Computer Science Lab, had for almost a decade been running the yearly Erlang User Conference in Stockholm. But November in Sweden is cold, and the time had come to conquer the world. The Erlang eXchange gives way to the first Erlang Factory, taking place in March 2009 in Palo Alto (California). A much more exotic, though equally beautiful, location.

For the first time, the European Erlang community met their American peers. We all got on like a house on fire, as you can imagine. At the conference, Tony Arcieri presents Reia, a Ruby-flavoured version of Erlang running on the BEAM. Who said that a Ruby-like syntax is a bad idea? Other presenters and attendees that year went on to have stellar careers as entrepreneurs and leaders in the tech space.

An Erlang user in the US at the time was Tom Preston-Werner. He was using it to scale the Ruby front-end of a social coding company called GitHub. In November of 2009, when in Stockholm for the Erlang User Conference, I introduced him and Scott Chacon to the OTP team. They spent an afternoon together, prompting the OTP team to move the development of Erlang to GitHub, making it its main repository.

Conferences spread all over the world. Events have been held in Amsterdam, Bangalore, Berlin, Buenos Aires, Brussels, Chicago, (many places I can not spell in) China, Krakow, Los Angeles, Paris, Moscow, Mexico City, Milan, Munich, New York, Rome, San Francisco, St Andrews, Tel Aviv, Vancouver, Washington DC and many many other places.

The Cappuccino Years

In 2010, I teach my first graduate class at Oxford University. Erlang was picked for the Concurrency Oriented Programming course. It was also the year Bruce Tate's Seven Languages in Seven Weeks was released. It was through this book that one of Rails' core committers, José Valim, realized that Erlang was ahead of everyone in the concurrency race because it also tackled distribution.

In January 2011, the first commit in the Elixir repo happens. The results are presented the following year at the Krakow Erlang Factory, and the language reaches version 1.0 in September 2014. Like every successful language creator, José was trying to solve a problem, namely bringing the power of Erlang to wider communities, starting with the Web.

The time was right. In January 2012, WhatsApp announce that by modifying FreeBSD and the BEAM, they achieved 2 million TCP/IP connections in a single VM and host. Their goal was to reduce operational costs, running a scalable service on a hardware footprint that was as small as possible. These results were applicable to many verticals, the Web being one of them.

The same month as the WhatsApp announcement, a group of companies pooled together knowledge, time and resources to create the Industrial Erlang User Group. They worked with Ericsson to move Erlang from a derivative of the open source Mozilla Public License to the Apache License, contributed to the dirty schedulers, got a bug tracking tool launched, funded a new site, launched Erlang Central, and worked together with the aim of setting up a foundation.

Elixir Comes of Age

In July 2014, Jim Freeze organises the first Elixir Conference in Austin (Texas). There were 106 attendees, including keynote speaker Dave Thomas' dog. Chris McCord presented Phoenix, rising from the ashes. Robert Virding and I were part of the lineup, and I recall my message loud and clear: just because you know Ruby, don't believe them when they tell you that learning Elixir is easy. Your challenge will be thinking concurrently.

The main idea behind Elixir is concurrency, and knowing how to deal with it is critical to the success of the project. A year later, in August 2015, Phoenix 1.0 was released. It had the same effect Rails had on Ruby, bringing people to Elixir. Now, you didn’t need to master concurrency to get it! Nerves came along soon after, moving Elixir away from being a language solely for the web.

At ElixirConf, I spoke about the book I was co-authoring with Steve Vinoski, Designing for Scalability with Erlang/OTP. At the time, it was available as a beta release. Little did I know that I would have to wait until June 2016 to hold a paper copy. The last four chapters, which should have been a book of their own, ended up taking 1.5 years to write. The main lesson for others writing a book is that if your partner tells you "you are going to become a father", you have 8 months to finish the book. The alternative is ending up like me, attending the release party a few days before your second child is due. The book was dedicated to Alison, Peter and our baby bump. Baby bump was born in early July, bringing truth to the Erlang saying that "you do not truly understand concurrency until you have your second child".

The Erlang Ecosystem

Throughout 2016, Elixir adoption kept on growing. Conference talks on Lisp Flavoured Erlang and Efene - two other languages on the BEAM - revealed they had code running in production. New experimental ports kept appearing on our radar; the era of a single language was over. Just as .NET encompasses C#, F#, Visual Basic and others, and the JVM ecosystem encompasses Java, Scala, Clojure and Groovy, to mention but a few, the same happened with Erlang and the BEAM, prompting Bruce Tate to coin the term Erlang Ecosystem.

Alpaca, Clojerl, Efene, Elm-beam, Erlog, Erlua, Fez, Joxa, Lisp Flavoured Erlang and Reia, alongside Erlang and Elixir, are opening an era of interaction and cooperation across languages. Together, we are stronger and can continue evolving!

In December of 2018, the paperwork for the Erlang Ecosystem Foundation was submitted, setting up a non-profit whose goal is to foster the ecosystem. I am looking forward to more languages on the BEAM gaining in popularity as we improve interoperability, common tools and libraries. And as the demand for scalable and fault tolerant systems increases, so does the influence of Erlang's constructs and semantics on new languages within and outside the ecosystem. I hope this will set the direction for the next 20 years as a new generation of technology leaders and entrepreneurs spreads its wings.

The Future

In 2018, at the Code BEAM Stockholm conference covering the Erlang Ecosystem (formerly known as the Erlang User Conference), Johan Bevemyr from Cisco announced that they ship two million devices per year with Erlang applications running on them. That blew the audience away, as it meant that 90% of all internet traffic goes through routers and switches controlled by Erlang. Erlang powers Ericsson's GPRS, 3G and 4G/LTE networks and, if recent job ads are anything to go by, their 5G networks too. It powers MQTT for IoT infrastructure through VerneMQ and EMQ, two of the most popular MQTT brokers. Erlang powers not only the internet and mobile data networks; it is the backbone of tens of thousands of distributed, fault tolerant systems. It switches billions of dollars each day through its financial switches and even more messages through its messaging solutions. You can't make this stuff up!

These are just some of my personal highlights from the last 20 years. In it all, there is a realisation that we are far from done. Joe Armstrong, in 1995, told me Erlang will not be around forever. Some day, he said, something better will come along. Fast forward to December 2018, I am still waiting, with an open mind, for that prophecy to come true. Whatever it is, there is no doubt Erlang will be a heavy influence on it.

A big thank you to Joe, Mike and Robert for making that first phone call, and to Bjarne for enabling it. A shout out to Jane who, by getting it outside of Ericsson, ensured its survival. You all started something which has allowed me to meet, work and learn with amazing and talented people using a technology that we are all passionate about. It has given us a platform enabling many of us to drive the innovation forward for the next 20 years (at least)!


N2O/WebSocket for Standard ML

The N2O ECO Standard ML implementation.

This page contains the description of a WebSocket and static HTTP server implementation, and of the protocol stack for application development on top of it that conforms to the N2O ECO specification.

As you may know, there was no WebSocket implementation for Standard ML until now. Here is the first WebSocket server, with a fallback to static HTTP when needed for serving the host pages of the WebSocket client. Finally, a demo echo application is presented as an N2O protocol for Standard ML.

$ wscat -c ws://
connected (press CTRL+C to quit)
> helo
< helo

We also updated the N2O ECO site with an additional o1 (Standard ML) implementation alongside the existing o3 (Haskell) and o7 (Erlang) implementations.

N2O has two default transports, WebSocket and MQTT. The Standard ML version provides a WebSocket/HTTP server along with its own vision of typing the N2O tract, as it differs from the Haskell version.

TCP Server

In Standard ML, one book you will need for implementing a WebSocket server is Unix System Programming with Standard ML.

Standard ML has two major distributions that support the standard concurrency library CML, a Concurrent ML extension implemented as part of the base library with its own scheduler written in Standard ML. This library is supported by the SML/NJ and MLton compilers, so N2O for Standard ML supports both of them out of the box.

fun run (program_name, arglist) =
    let val s = INetSock.TCP.socket ()
    in  Socket.Ctl.setREUSEADDR (s, true);
        Socket.bind (s, INetSock.any 8989);
        Socket.listen (s, 5);
        acceptLoop s
    end

The acceptor loop uses the CML spawn primitive to create a lightweight context for each socket connection:

fun acceptLoop server_sock =
    let val (s, _) = Socket.accept server_sock
    in  CML.spawn (fn () => connMain s);
        acceptLoop server_sock
    end

WebSocket RFC 6455

In a pure and refined web stack, the WebSocket and static HTTP servers can be unified up to a switch function that performs the HTTP 101 upgrade:

fun switch sock =
    case serve sock of
        (req, resp) =>
            (sendResp sock resp;
             if (#status resp) <> 101
             then ignore (Socket.close sock)
             else WebSocket.serve sock (fn msg => M.hnd (req, msg)))

The logic of the 101 upgrade response can be obtained from RFC 6455.

fun upgrade sock req =
    (checkHandshake req;
     { body = Word8Vector.fromList nil,
       status = 101,
       headers = [("Upgrade", "websocket"),
                  ("Connection", "Upgrade"),
                  ("Sec-WebSocket-Accept", getKey req)] })

We also have a needUpgrade predicate that checks the headers:

fun needUpgrade req =
    case header "Upgrade" req of
        SOME (_, v) => (lower v) = "websocket"
      | _ => false

The WebSocket structure contains the decoder and encoder to/from raw Frame packets, and the Msg datatype used in higher-level code:

datatype Msg = Text of V.vector | Bin of V.vector | Close of Word32.word | Cont of V.vector | Ping | Pong
type Frame = { fin : bool, rsv1 : bool, rsv2 : bool, rsv3 : bool, typ : FrameType, payload : V.vector }

SHA-1 RFC 3174

The getKey function relies on SHA-1, which is specified in a separate RFC 3174 and also needs to be implemented. Fortunately, an implementation by Sophia Donataccio already existed, so we could derive a smaller one from it.

fun getKey req =
    case header "Sec-WebSocket-Key" req of
        NONE => raise BadRequest "No Sec-WebSocket-Key header"
      | SOME (_, key) =>
            let val magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
            in Base64.encode (SHA1.encode (Byte.stringToBytes (key ^ magic)))
            end
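As a cross-check of the key computation, the same Sec-WebSocket-Accept value can be sketched in Erlang; this is an illustrative aside rather than part of the Standard ML implementation. The key below is the sample client key from RFC 6455, and the expected accept value is the one given in that RFC:

```erlang
%% Sec-WebSocket-Accept = base64(sha1(Key ++ Magic)), per RFC 6455.
Key = <<"dGhlIHNhbXBsZSBub25jZQ==">>,               %% sample key from the RFC
Magic = <<"258EAFA5-E914-47DA-95CA-C5AB0DC85B11">>, %% fixed GUID from the RFC
Accept = base64:encode(crypto:hash(sha, <<Key/binary, Magic/binary>>)).
%% Accept =:= <<"s3pPLMBiTxaQ9kYGzzhZRbK+xOo=">>
```

Note that the magic GUID must be concatenated with hyphens only; a typographic dash sneaking into the string produces a handshake failure that is hard to spot.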

HTTP/1.1 RFC 2068

The handshake check has only three failure cases:

fun checkHandshake req =
    (if #cmd req <> "GET" then raise BadRequest "Method must be GET" else ();
     if #vers req <> "HTTP/1.1" then raise BadRequest "HTTP version must be 1.1" else ();
     case header "Sec-WebSocket-Version" req of
         SOME (_, "13") => ()
       | _ => raise BadRequest "WebSocket version must be 13")

Web Server

The internal types for the HTTP and WebSocket server specify the Req and Resp types that are reused in the HTTP and N2O layers for all implementations:

structure HTTP = struct
    type Headers = (string * string) list
    type Req = { cmd : string, path : string, headers : Headers, vers : string }
    type Resp = { status : int, headers : Headers, body : Word8Vector.vector }
end

The handler signature hides the call chain from HTTP request and WebSocket frame to the binary result returned to the socket. You should create your own application-level implementation of this abstraction, which will be called in the context of the TCP server:

signature HANDLER = sig
    val hnd : HTTP.Req * WebSocket.Msg -> WebSocket.Res
end

N2O Application Server

The N2O signature specifies the I/O types along with the context and the run function:

signature PROTO = sig
    type Prot
    type Ev
    type Res
    type Req
    val proto : Prot -> Ev
end

The N2O functor wraps the context and the runner with a protocol type, which provides the application-level functionality.

functor MkN2O(M : PROTO) = struct
    type Cx = { req : M.Req, module : M.Ev -> M.Res }
    fun run (cx : Cx) (handlers : (Cx -> Cx) list) (msg : M.Prot) =
        (#module cx) (M.proto msg)
end

Echo Application

According to the provided specification, we can write an echo server example application that unveils the protocol and context implementation:

structure EchoProto : PROTO = struct
    type Prot = WS.Msg
    type Ev = Word8Vector.vector option
    type Res = WS.Res
    type Req = HTTP.Req
    fun proto (WS.TextMsg s) = SOME s
      | proto _ = NONE
end

structure Echo = MkN2O(EchoProto)

The echo handler contains all the server context packed in a single structure. Here, echo is a page subprotocol that is plugged in as a local handler for the protocol runner. The router function performs module extraction, returning only the echo subprotocol in the context for each request. The run function is called with the given router for each message and each request:

structure EchoHandler : HANDLER = struct
    fun echo NONE = WS.Ok
      | echo (SOME s) = WS.Reply (WS.Text s)
    fun router (cx : Echo.Cx) = { req = #req cx, module = echo }
    fun hnd (req, msg) = Echo.run { req = req, module = echo } [router] msg
end

structure Server = MkServer(EchoHandler)

The code is provided at


MongooseIM 3.2 - Meet our Inbox

Since the last release of our enterprise-ready, highly scalable IM platform, MongooseIM 3.1, the team has been working at full speed on an expanded set of features and improvements aimed at bringing MongooseIM to the next level. The result is MongooseIM 3.2. With highlights such as a full-featured Inbox, worker pool unification, PKI for BOSH and WebSockets, and multi-aspect TLS integration, this release is a big step in the evolution of our server!

What we love about MongooseIM 3.2

The latest iteration of MongooseIM not only introduces exclusive and uncommon solutions, such as the advanced Inbox or multi-aspect TLS integration, it also becomes more and more open to new angles in providing a full-fledged messaging solution that responds to global market demands.

Among the highlights of the 3.2 release, there is another bundle of Inbox improvements. The most significant one is the ability to attach the unread messages count to push notifications, so the application badge may be updated appropriately. Our full-featured Inbox keeps maturing, becoming a complete extension for virtually every chat application!

We also added PKI for BOSH and WebSockets, allowing web XMPP clients to authenticate with a TLS certificate.

This version certainly makes devops’ life easier. Unified worker pools, provided in 3.2, enable much more convenient and consistent configuration of outgoing connections (e.g. to databases). They also grant better stability, predictability, and inspection of their behavior.

Meet our Inbox

Inbox in MongooseIM 3.2 received a couple of nice improvements, but one of them is particularly important: integration with the Event Pusher. Among other things, this component is responsible for delivering push notification data to MongoosePush and other services, and using it with Inbox opens up some new possibilities. One example is simplifying the incrementing of the unread count (the "badge"), which up until now was hardcoded to "1" and required extra effort from the frontend team when trying to display the real number of notifications. The push notification implementation was unable to access this information at the moment of event delivery, so the client application had to work around this and risk being inaccurate.

With the new update, a valid number is always attached to push notifications so the applications may display it in a simple, natural and precise manner. MongooseIM 3.2 introduces a number of updates that should make your client development team much happier. All inbox query results include total unread messages count and the number of “active” (with at least one unread message) conversations. A client may also filter explicitly for active conversations only.

We would like to present our new Inbox demo, showcasing three of the newly added features:

  1. Unread messages counter implemented for push notifications
  2. New filtering option - a user may choose to query for conversations with unread messages only
  3. Extended error description - the readability of error messages is improved.

Examples in the demo are prepared with the Forward application.

Worker pools - unite!

When you take a look at "MongooseIM 3.1 - Inbox got better, testing got easier", you may notice an inconspicuous paragraph on "worker pool unification". This small internal improvement expanded into one of the most satisfying cleanups in MongooseIM in recent history. It affects not only the server's inner workings but also changes the way pools are configured.

Until 3.1.x (included) the config schema varied between multiple connection types:

  1. For RDBMS there was an odbc_server key (misleading, right?) with several unlabeled elements (you had to remember their order or use the docs). Pool size was defined under a separate key. Oh, right, there was yet another key to define the pool names alone. And you couldn't define a common pool for all XMPP domains.
  2. For HTTP, we had an http_connections key. Pool size was provided as one of key-value pairs.
  3. For Cassandra you could specify the pool size as well. As one of cassandra_servers tuple elements. And so on…

You can clearly see that various aspects of configuration (how pool size was provided, was the pool global or per-host) were inconsistent and could lead to confusion.

In 3.2 these connections are governed by dedicated server logic, which in the future will allow us to expose generic metrics or implement extra debugging tools. Configuration-wise, all of these tuples have been replaced with an outgoing_pools list, where every element follows the simple convention: {Type, Host, Tag, PoolOpts, ConnectionOpts}. Type resolves to one of the implemented pool specifications, such as rdbms, riak or http. Host indicates whether there should be only one pool for all XMPP domains, separate pools for every XMPP domain, or just one pool for one of the domains. Tag is a name that can be referenced by specific extensions; it allows you to e.g. have separate RDBMS pools for write and read operations. PoolOpts are pool-centric options, such as worker count or worker selection strategy. ConnectionOpts are items specific to the chosen connection driver.
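To make the convention concrete, an outgoing_pools entry might look like the sketch below. Note that the option names, tags, hostnames and sizes here are hypothetical illustrations of the {Type, Host, Tag, PoolOpts, ConnectionOpts} shape, not verbatim MongooseIM syntax; consult the documentation for the exact keys:

```erlang
{outgoing_pools, [
    %% {Type, Host, Tag, PoolOpts, ConnectionOpts}
    %% A single global RDBMS pool shared by all XMPP domains:
    {rdbms, global, default,
        [{workers, 10}],                          %% pool-centric options
        [{server, "...driver-specific spec..."}]  %% connection driver options
    },
    %% One HTTP pool per XMPP domain, tagged so extensions can reference it:
    {http, host, auth_service,
        [{workers, 5}],
        [{server, "https://auth.example.com"}]}
]}.
```

The point of the shared shape is that pool size, scope and driver options always sit in the same place, regardless of the connection type.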

Our work here is still not finished. We would like to leverage the common logic even further. It is possible to create more interesting metrics. We also plan to make existing extensions more flexible, as currently many of them use hardcoded pool tags or even hosts. It is a temporary state so you may expect it to improve in future releases!

Certificates everywhere

Security is paramount whether we are talking enterprise or not. SASL EXTERNAL authentication mechanism was introduced in MongooseIM 2.2. As a reminder: it allows XMPP clients to authenticate with TLS certificates (without extra password). It was originally implemented only for raw TCP connections so web clients could not benefit from this feature. MongooseIM 3.2 extends this method’s capabilities and now it may be used with Websocket (wss://) and BOSH (https://) connections.

PKI authentication is the current industry standard in terms of secure authentication, so MongooseIM 3.2 brings this feature to a whole new audience.

Other improvements

MongooseIM 3.2, just like previous releases, includes many (non-)technical improvements. A large part of them is purely internal or developer-focused. See the changelog to read about all of them. Here are some highlights!

Migration guide

We all know that it's a good idea to always keep your software up to date. One exception is a certain OS which may suddenly remove all of your data after an automatic upgrade. In our effort to steadily improve MongooseIM's quality, intuitiveness and feature set, sometimes bold changes have to be made. We are aware that adjusting a config file or a custom code base may not always be that straightforward.

For all of those users who always stick to the most recent release of MongooseIM, we've created a Migration Guide section in our documentation. Each page describes what needs to be changed to migrate between subsequent versions. The current chapter is "3.1.1 to 3.2" and it may be found here:


As part of our effort to improve consistency in our source code and the repository, we have changed the name of MongooseIM's main config file from ejabberd.cfg to mongooseim.cfg. It is definitely more intuitive. However, existing projects which deploy MongooseIM releases with custom scripts may need to have their pipelines updated.


Please feel free to read the detailed changelog, where you can find a full list of source code changes and useful links.


Special thanks to our contributors: @getong @igors @justinba1010

Test our work on MongooseIM 3.2 and share your feedback

Help us improve the MongooseIM platform:

  1. Star our repo: esl/MongooseIM
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter: @MongooseIM
  4. Download Docker image with new release
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.


20 years of open source Erlang: OpenErlang Interview with Robert Virding

This is our last OpenErlang Interview of the series! Over the last couple of months we’ve spoken to some serious Erlang players, including massive companies such as WhatsApp and AdRoll that use Erlang, as well as enthusiasts and champions of tech and open source languages. We started off our celebrations with an interview with Robert Virding and Joe Armstrong, two of the creators of Erlang!

How Erlang and its community have grown over the last 20 years is a testament to its creators, who developed the language in the Ericsson labs during the 1980s.

Robert Virding needs no introduction, so let’s go straight into our interview as he speaks about WHY Erlang was made including its scalability and robustness, and where he sees the language going from here.

We have the transcript listed at the bottom of this blog post.

About Robert

Robert Virding is one-third of the reason Erlang exists; along with Joe Armstrong and Mike Williams, Robert developed Erlang in 1986, and the language was used solely at Ericsson before being released as open source in 1998.

Robert originally worked extensively on improving garbage collection and functional languages, but has since developed his entrepreneurial spirit, having co-founded the first Erlang startup, Bluetail.

The co-creator was an early member of the Ericsson Computer Science Lab and currently works as Principal Language Expert at Erlang Solutions; he is also a keen speaker and educator.

About Erlang

Erlang was created in the Ericsson labs in the mid-80s by Robert Virding, Joe Armstrong and Mike Williams, and Ericsson continued to use it internally until it was made open source in 1998. Jane Walerud was a key figure in helping the language become open source as well.

The name “Erlang” came from an abbreviation of “Ericsson Language”, along with a reference to the Danish mathematician and engineer Agner Krarup Erlang, who founded the fields of traffic engineering and queueing theory.

The language was designed with the aim of improving the development of telephony applications, and it runs on a garbage-collected runtime system. The key selling point of Erlang is the “write once, run forever” motto, coined by Joe Armstrong in an interview with Rackspace in 2013. Other characteristics of Erlang include:

• Fault-tolerant
• Soft real-time
• Immutable data
• Pattern matching
• Functional programming
• Distributed
• Highly concurrent
• Highly available
• Hot swapping

The success of the language is often due to its “robustness”: the ability to run huge numbers of processes while still maintaining speed and efficiency. Error handling is non-local, so if one process breaks or encounters a bug, the rest of the system continues running!

The name Erlang is often used interchangeably with Erlang/OTP.

Interview Transcript

At work with the boss breathing down your neck? Or don’t want to be one of those people playing videos out loud on public transport? Here’s the transcript, although it’s not as exciting as the real thing.

Robert Virding: Erlang is a programming language, and the system around that, for building systems which are highly concurrent and fault-tolerant. Systems where a lot of things are going on at the same time and you don’t want the system to crash when things go wrong…because they always go wrong! We originally worked for telephone switches but now it’s being used for a lot of different things, for example, web servers. If you have a big web server, of course, you want a lot of things going on at the same time.

It’s quite fascinating because it’s being used for things we had never dreamed of when we originally created the language, that would come along later.

2018 was interesting: it was 20 years ago that Erlang became open source and available to users outside Ericsson. Since then it’s been spreading. It is still spreading and being used in more things with different types of users, and other languages are written on top of it. So in that sense it’s an interesting year because it’s just getting bigger and bigger and spreading more and more.

Erlang has been used in a lot of different places I never dreamed of. I’m honestly impressed with what people do with it.

It’s used, for example, in online betting servers, some of those are using Erlang. You, as a user, see it but you don’t notice it’s there. Classic case, of course, is WhatsApp which uses Erlang.

When you’re sending data through your mobile phone, you’ll be sending it through the systems written in Erlang. You don’t see that. They’re just there and they just work.

Erlang’s important because a lot of companies have problems that need these features that Erlang provides. They [these companies] want to do a lot of things at the same time. They need fault tolerance. Their system must just not go down; that’ll give them a really bad reputation and lose clients and things like this. They have these issues and they find out, yes, of course we want all these things that Erlang can provide even though we might not have thought that from the beginning of this. When they realise that, then Erlang becomes a good solution to them and their problem.

[00:01:55] [END OF AUDIO]

OpenErlang; 20 Years of Open Sourced Erlang

Erlang was originally built for Ericsson, and Ericsson only, as a proprietary language to improve telephony applications. It is also referred to as “Erlang/OTP” and was designed to be a fault-tolerant, distributed, real-time system that offered pattern matching and functional programming in one handy package.

Robert Virding, Joe Armstrong and Mike Williams used the programming language at Ericsson for approximately 12 years before it went open source to the public in 1998. Since then, it has powered a huge number of businesses, big and small, offering massively reliable systems and ease of use.

OpenErlang Interview Series

As mentioned, this isn’t the first in the #OpenErlang Interview series. We have several more existing videos to enjoy.

Robert Virding and Joe Armstrong

It only seems fitting to have launched with the creators of Erlang, Robert Virding and Joe Armstrong (minus Mike Williams). Robert and Joe talk about their journey with Erlang, including the early days at Ericsson and how the Erlang community has developed.

Christopher Price

Last week we launched our second #OpenErlang Interview, featuring Ericsson’s Chris Price. Currently the President of Ericsson Software Technology, Chris has been championing open source technologies for a number of years.

Chris chats to us about how Erlang has evolved, 5G standardization technology, and his predictions for the future.

Jane Walerud

Jane is a serial entrepreneur of the tech persuasion. She was instrumental in promoting and open sourcing Erlang back in the 90s. Since then, she has continued her entrepreneurial activities, helping launch countless startups within the technology sector from 1999 to the present day. Her work has spanned many influential companies that use the language, including Klarna, Tobii Technology, Teclo Networks and Bluetail, which she helped found.

Other roles have included Member of the Board at Racefox, Creades AB and Royal Swedish Academy of Engineering Sciences, and a key role in the Swedish Government Innovation Council.

Simon Phipps

Once Erlang became an open source programming language, it was able to flourish. It gained a passionate following which has since developed into a close community. Simon Phipps dedicates his time to open source, promoting languages such as Erlang through the Open Source Initiative and similar schemes.

Why are open source languages such as Erlang so important? Find out more!

Anton Lavrik

Anton is a Server Engineer at one of the biggest mobile applications in the world. Yep, we’re talking about WhatsApp, which runs on Erlang! WhatsApp is capable of sending billions of messages every single day, and Erlang’s stability and concurrency are a significant reason why it’s perfect for the amount of traffic WhatsApp receives. Discover WhatsApp’s Erlang journey with Anton.

Miriam Pena

Miriam has been an engineer at AdRoll for nearly 5 years. During that time, she has worked with Erlang, Elixir, AWS, Python, DynamoDB, Kinesis and Memcached. Currently, AdRoll’s real-time bidding system handles a peak volume of 1.5 million requests every SECOND. Read more about how AdRoll uses Erlang.

The #OpenErlang Parties

Catch up with our Stockholm party held in May 2018!

If you’re interested in contributing and collaborating with us at Erlang Solutions, you can contact us at


Retiring old performance pitfalls

Erlang/OTP 22 will bring many performance improvements to the table, but most of them have a broad impact and don’t affect the way you write efficient code. In this post I’d like to highlight a few things that used to be surprisingly slow but no longer need to be avoided.

Named fun recursion

Named funs have a neat little feature that might not be obvious at first glance: their name is a variable like any other, and you’re free to pass it to another function or even return it.

deepfoldl(F, Acc0, L) ->
    (fun NamedFun([_|_]=Elem, Acc) -> lists:foldl(NamedFun, Acc, Elem);
         NamedFun([], Acc) -> Acc;
         NamedFun(Elem, Acc) -> F(Elem, Acc)
     end)(L, Acc0).
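For readers who don’t write Erlang every day, what the fun above computes is simply a deep left fold. Here is a rough Python sketch of the same semantics (illustrative only; it mirrors the clauses of the Erlang fun, not anything the compiler does):

```python
def deepfoldl(f, acc, lst):
    """Deep left fold: recurse into sublists, apply f to leaf elements."""
    for elem in lst:
        if isinstance(elem, list):
            acc = deepfoldl(f, acc, elem)  # the NamedFun([_|_]=Elem, Acc) clause
        else:
            acc = f(elem, acc)             # the NamedFun(Elem, Acc) clause
    return acc
```

For example, `deepfoldl(lambda x, acc: acc + x, 0, [1, [2, [3]], 4])` evaluates to 10.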

This is cool but a bit of a headache for the compiler. To create a fun we pass its definition and free variables to a make_fun2 instruction, but we can’t include the fun itself as a free variable because it hasn’t been created yet. Prior to OTP 22 we solved this by creating a new equivalent fun inside the fun, but this made recursion surprisingly expensive both in terms of run-time and memory use.

As of OTP 22 we translate recursion to a direct function call instead which avoids creating a new fun. Other cases still require recreating the fun, but they’re far less common.

Optimize named funs and fun-wrapped macros #1973

List subtraction with large operands (-- operator)

While the Erlang VM appears preemptively scheduled from the programmer’s point of view, it is co-operatively scheduled internally. When a native function runs, it monopolizes the scheduler until it returns, so a long-running native function can severely harm the responsiveness of the system. We’ve therefore written nearly all such functions in a style that breaks the work into short units that complete quickly enough, but there is a steadily shrinking list of functions that misbehave, and list subtraction was one of them.

It’s usually pretty straightforward to rewrite functions in this style, but the old algorithm processed the second list in a loop around the first list, which is problematic since both lists can be very long and resuming work in nested loops is often trickier than expected.

In this case it was easier to just get rid of the nested loop altogether. The new algorithm starts out by building a red-black tree from the right-hand side before removing elements from the left-hand side. As all operations on the tree have log n complexity we know that they will finish really quickly, so all we need to care about is yielding in the outer loops.

This also had the nice side-effect of reducing the worst-case complexity from O(n²) to O(n log n) and let us remove some warnings from the reference manual and efficiency guide. It’s worth noting that the new implementation is always faster than the proposed workarounds, and that it falls back to the old algorithm when it’s faster to do so.
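The shape of the new algorithm is easy to sketch outside Erlang too. In this hedged Python sketch, a dict of counts stands in for the red-black tree (hash lookups instead of log-n tree lookups, but the same single-pass structure with no nested loop):

```python
def list_subtract(a, b):
    """Occurrence-aware subtraction, like Erlang's A -- B: each element
    of b removes at most one matching occurrence from a."""
    # First loop: build occurrence counts from the right-hand side.
    counts = {}
    for x in b:
        counts[x] = counts.get(x, 0) + 1
    # Second loop: one pass over the left-hand side. Both loops are
    # flat, so a yielding implementation only has to suspend here.
    result = []
    for x in a:
        if counts.get(x, 0) > 0:
            counts[x] -= 1   # consume one occurrence
        else:
            result.append(x)
    return result
```

For instance, `list_subtract([1, 2, 3, 2, 1], [2, 1])` gives `[3, 2, 1]`, matching Erlang’s `[1,2,3,2,1] -- [2,1]`.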

This change will be rolled out in OTP 21.2, big thanks to Dmytro Lytovchenko (@kvakvs on GitHub) for writing the better half of it!

Optimize list subtraction (A -- B) and make it yield on large inputs #1998

Lookahead in bit-syntax matching

The optimization pass for bit-syntax matching was completely rewritten in OTP 22 to take advantage of the new SSA-based intermediate format. It applies the same optimizations as before so already well-optimized code is unlikely to see any benefit, but it manages to apply them in far more cases.

For those who aren’t familiar, all bit-syntax matching operates on a “match context” internally, which is a mutable object that keeps track of the current match position. This helps a lot when matching complicated patterns as it can zip back and forth as required, saving us from having to match components more than once.

This is great when matching several different patterns, but it comes in really handy in loops like the following:

trim_zero(<<0,Tail/binary>>) -> trim_zero(Tail);
trim_zero(B) when is_binary(B) -> B.

As the compiler can see that Tail is passed directly to trim_zero, which promptly begins with a bit-match, it can skip extracting Tail as a sub-binary and pass the match context instead. This is a pretty well-known optimization called “match context reuse” which greatly improves performance when applied, and a lot of code has been written with it in mind.
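The spirit of the optimization, advancing a position instead of materializing a new binary on every step, carries over to other languages as well. A small Python sketch of trim_zero (an analogy, not the VM’s mechanism):

```python
def trim_zero(data: bytes) -> bytes:
    """Drop leading zero bytes without re-slicing on every iteration."""
    pos = 0  # plays the role of the match context's position
    while pos < len(data) and data[pos] == 0:
        pos += 1
    return data[pos:]  # build the resulting "sub-binary" exactly once
```

A naive version that slices `data[1:]` on every step would copy repeatedly; carrying the offset is what makes the loop cheap.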

The catch of passing a match context like this is that we have to maintain the illusion that we’re dealing with an immutable binary. Whenever it’s used in a non-matching expression we either need to convert the context to an equivalent binary, or admit defeat and skip the optimization.

While the compiler did a pretty good job prior to OTP 22 it gave up a bit too easily in many cases, and the most trivial example is almost funny:

calls_wrapper(<<"hello",Tail/binary>>) ->
    count_ones(Tail).

%% This simple wrapper prevents context reuse in the call above. :(
count_ones(Bin) -> count_ones_1(Bin, 0).

count_ones_1(<<1, Tail/binary>>, Acc) -> count_ones_1(Tail, Acc + 1);
count_ones_1(<<_, Tail/binary>>, Acc) -> count_ones_1(Tail, Acc);
count_ones_1(<<>>, Acc) -> Acc.

A trickier example can be found in the string module:

bin_search_inv_1(<<CP1/utf8, BinRest/binary>>=Bin0, Cont, Sep) ->
    case BinRest of
        %% 1
        <<CP2/utf8, _/binary>> when ?ASCII_LIST(CP1, CP2) ->
            case CP1 of
                Sep ->
                    %% 2
                    bin_search_inv_1(BinRest, Cont, Sep);
                _ ->
                    %% 3
        %% ... snip ...

What we’re looking at is a fast-path for ASCII characters; when both CP1 and CP2 are ASCII we know that CP1 is not a part of a grapheme cluster and we can thus avoid a call to unicode_util:gc/1. It’s not a particularly expensive function but calling it once per character adds up quickly.
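The pattern, a cheap ASCII test guarding an expensive general routine, translates readily. Below is a simplified Python sketch; absorbing combining marks stands in for real grapheme clustering, which is considerably more involved, so this illustrates the fast path rather than unicode_util:gc/1:

```python
import unicodedata

def grapheme_at(s, i):
    """Simplified 'expensive' path: absorb trailing combining marks."""
    j = i + 1
    while j < len(s) and unicodedata.combining(s[j]):
        j += 1
    return s[i:j], j

def next_grapheme(s, i):
    # Fast path: two ASCII code points in a row never form a cluster,
    # so we can skip the general routine entirely.
    if s[i] < "\x80" and (i + 1 == len(s) or s[i + 1] < "\x80"):
        return s[i], i + 1
    return grapheme_at(s, i)
```

Plain ASCII text never leaves the fast path, which is exactly why the string module’s loop pays almost nothing per character in the common case.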

At first glance it might seem safe to pass the context at 2, but this is made difficult by Bin0 being returned at 3. As contexts are mutable and change their position whenever a match succeeds, naively converting Bin0 back to a binary would give you what comes after CP2 instead.

Now, you might be wondering why we couldn’t simply restore the position before converting Bin0 back to a binary. It’s an obvious thing to do but before OTP 22 the context tracked not only the current position but also previous ones needed when backtracking. These were saved in per-context “slots” which were mutable and heavily reused, and the match at 1 clobbered the slot needed to restore Bin0.

This also meant that a context couldn’t be used again after being passed to another function or entering a try/catch, which made it more or less impossible to apply this optimization in code that requires looking ahead.

As of OTP 22 these positions are stored outside the context so there’s no need to worry about them becoming invalid, making it possible to optimize the above cases.

Rewrite BSM optimizations in the new SSA-based intermediate format #1958


How many functions do you have?

A while back this question popped up in one of the many places where the Erlang community gathers, I think it was IRC:

How many functions do you have in your Erlang node?

As an answer, Craig posted this beauty:

1> length([{M,F,A} || {M,_} <- code:all_loaded(), {exports,Fs} <- M:module_info(), {F,A} <- Fs]).
2> length([{M,F,A} || {M,_} <- code:all_loaded(), {exports,Fs} <- M:module_info(), {F,A} <- Fs]).

(Running the exact same expression twice yields two different, growing numbers.)

What’s going on here?

As usual, it’s better if you try understanding this by yourself. If you are up for the challenge, just boot up an Erlang VM and try to figure out what’s happening.

I’ll jump straight to the answer instead: Erlang has Dynamic Code Loading. As the docs point out…

The code server loads code according to a code loading strategy, which is either interactive (default) or embedded. In interactive mode, code is searched for in a code path and loaded when first referenced. In embedded mode, code is loaded at start-up according to a boot script. This is described in System Principles.

Alright, mystery solved: the new functions that appear when we evaluate the list comprehension a second time belong to modules that were loaded right there, because they were needed to evaluate the first list comprehension and print its result.

Let’s see if our observations match that hypothesis. And just for fun, let’s do it in Elixir. First, let’s check whether the same behavior shows up in iex, too:

iex(1)> length(for {m, _} <- :code.all_loaded, {:exports, fs} <- m.module_info(), {f, a} <- fs, do: {m, f, a})
iex(2)> length(for {m, _} <- :code.all_loaded, {:exports, fs} <- m.module_info(), {f, a} <- fs, do: {m, f, a})

Excellent! Now let’s find out which functions the new ones are. Actually, since modules are loaded all at once (the code server doesn’t load individual functions; it loads the entire module when it needs it), let’s find out which modules are new. To do that, we need to start a new VM, of course, and then…

iex(1)> ms = :code.all_loaded; length(ms)
iex(2)> new_ms = :code.all_loaded -- ms
[{Inspect.Integer, 'path/to/Elixir.Inspect.Integer.beam'},
 {Inspect.Algebra, 'path/to/Elixir.Inspect.Algebra.beam'},
 {Inspect.Opts, 'path/to/Elixir.Inspect.Opts.beam'},
 {Inspect, 'path/to/Elixir.Inspect.beam'}]

And there you have it: the new modules are none other than the ones required to print out 172 on the screen.
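Incidentally, the same lazy-loading behaviour shows up in other runtimes. In Python, for example, a module is loaded the first time it is imported and then cached in sys.modules (an analogy only; Erlang’s code server has its own code path and boot-script rules):

```python
import sys

before = set(sys.modules)
import colorsys  # first reference: the module is loaded right here
after = set(sys.modules)

# The difference contains "colorsys" (unless it was already loaded).
new_modules = after - before
```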

Spawnfest 2018 is Around the Corner

We are less than a month away from Spawnfest 2018, the annual international online FREE 48-hour-long hackathon for the BEAM community!

For this year we have an amazing panel of judges (Miriam Pena, Kenji Rikitake, Juan Facorro, René Föhring, Tristan Sloughter and Andrea Leopardi), some great sponsors (Erlang Solutions, AdRoll, Peer Stritzinger, ScrapingHub, 10Pines, Fiqus and LambdaClass) and, more importantly… some AMAZING PRIZES!! You can win tickets to conferences, gift cards from Amazon, books, GRiSP boards, GoPros, headsets, t-shirts…

Build your team, register at our website and join us on November 24th and 25th! It will be great!

If you don’t have a team, you can register yourself in our super find-a-team machine™️ and we will help you find one.

And if you don’t have an idea, check our list of suggestions.

How many functions do you have? was originally published in Erlang Battleground on Medium, where people are continuing the conversation by highlighting and responding to this story.


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.