FIPS for Erlang & Elixir Systems

FIPS (Federal Information Processing Standards)[1] are a set of standards defined by NIST (the National Institute of Standards and Technology) to govern and control how computer systems interoperate using industry-approved, credible and acceptable methods. FIPS-140 isn't a necessity across the board, but if you're operating in uncompromising sectors such as government, I highly recommend you work to become FIPS-140 compliant. The FIPS-140 publication defines the security requirements pertaining to cryptographic modules and functions employed in a computer system, in all phases including, but not limited to, design, implementation, installation and usage.

The requirement for Erlang-based systems such as RabbitMQ, MongooseIM, Riak and WombatOAM to meet FIPS-140 compliance is surging rapidly, due to their dense usage in mission-critical environments, such as governmental sectors where the security requirements are uncompromising and of utmost importance. These Erlang systems are composed of multiple applications that operate on top of a stack of underlying applications and subsystems, such as the Erlang virtual machine and the operating system. Thus, in order to fulfil FIPS-140 compliance, considerations must cut across these underlying, enabling layers as well, with an emphasis on the operation of, and interaction with, the validated FIPS security module(s). The following is a graphic illustration and discussion of the components and aspects considered in an Erlang/Elixir system in order to conform to the FIPS-140 standard.

Fig 1: Components and aspects to be considered for FIPS-140 compliance

1. Hardware

FIPS-140 defines platform security requirements, from the hardware to the operating system along with its security libraries. The requirements on hardware span from classifying the types of hardware security modules, such as single-chip and multi-chip embedded cryptographic modules, in which aspects such as the physical covering and coating for protection against tampering are defined, along with other internal aspects such as zeroization of sensitive "on-chip" information, and more. In the scope of Erlang, embedded systems being developed and shipped with, for example, Erlang-ALE (and Elixir-ALE or Nerves, for embedded Elixir systems) would need to take such factors into consideration in both firmware implementation and hardware fabrication.

2. Operating Systems and Virtualisation

The requirements on operating systems govern the types (whether or not the OS is trusted) and the manner in which the employed cryptographic software modules function with regard to the provision of data protection as defined in the FIPS-140 standard. These requirements are vast and, to the pleasure of developers and the community, libraries such as OpenSSL implement these feature sets, which, with good understanding, need to be configured and enabled for FIPS-140 compliance at runtime. FIPS-140 recommends a modular approach to software design and implementation, and OpenSSL follows suit, realising FIPS-140 through a validated OpenSSL FIPS Object Module. Other libraries such as GnuTLS also incorporate FIPS-140 compliance as preferred by the user.
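From the Erlang/Elixir side, a quick sanity check is possible through OTP's crypto application, which reports whether the underlying OpenSSL build is FIPS-capable and whether FIPS mode is currently active. A minimal sketch (enabling FIPS at boot via the crypto application's fips_mode setting assumes an OTP release built against a FIPS-validated OpenSSL):

```elixir
# :crypto.info_fips/0 returns :enabled, :not_enabled or :not_supported.
case :crypto.info_fips() do
  :enabled ->
    IO.puts("FIPS mode is active")

  :not_enabled ->
    # FIPS-capable build that hasn't been switched on; it can be enabled
    # at boot via the crypto application environment, e.g. in config.exs:
    #   config :crypto, fips_mode: true
    IO.puts("FIPS-capable build, FIPS mode currently off")

  :not_supported ->
    IO.puts("this OTP build has no FIPS support")
end
```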

Also within the scope of aspects to be considered for operating systems fulfilling FIPS-140 are the spawnable virtual machines and instances, which encapsulate and act as containers for efficient provisioning of various applications. These range from fully-fledged operating system environments, such as those provisioned by tools like Docker, Kubernetes and CoreOS, through to application runtime virtual machines such as the Erlang virtual machine, the JVM, .NET, and so on.

Fig 2: Erlang

3. Application

Applications executing on the aforementioned hardware and operating system platforms also need to be compliant with FIPS-140 regulations. At this layer of a computer system, for the Erlang and Elixir ecosystem, we find systems and applications such as RabbitMQ, MongooseIM, Riak, Megaload, Phoenix and so forth, which would need to be altered via configuration or implementation in order to meet FIPS-140 requirements. These changes vary in complexity, and depend on aspects such as the deployed version in use. (For instance, some legacy systems may possess fewer configurable features to cater for such alterations, and thus could require more specialist effort to achieve FIPS-140 compliance.) The scope of the application layer is very broad, covering all intrinsic security-related matters within the application itself, along with how these interact with the underlying platform on which they're executed.

Fig 3: Erlang/Elixir Systems

4. Communications

In addition to an application being intrinsically FIPS-140 compliant, together with its underlying platform components, the security pertaining to how it communicates and interacts with other applications has to be FIPS-140 compliant as well. All communication, along with all post and pre-communication procedures carried out must be secure.

These requirements cut across the majority of the application's intrinsic functions, with emphasis on the communication aspects. The Transport Layer Security (TLS) protocol is the dominant standard by which secure communication between multiple nodes is established. For Erlang-based systems, this is a critical factor, as the backbone of most, if not all, interactions in multi-node deployments is the Erlang Distribution protocol. The implication is a need for a secure Erlang Distribution setup, in conformance with FIPS-140. Such multi-node/cluster deployments don't come secured out of the box, and require expertise in this arena to implement. Additionally, if any other inter-node communication protocol is employed, similar conformance procedures to fulfil FIPS-140 would have to be put in place.
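To make this concrete, Erlang Distribution can be switched from plain TCP to TLS by starting each node with the TLS distribution module and an options file. A sketch of the flags involved (the file path is a placeholder; the certificates and cipher suites configured inside the optfile are what FIPS-140 actually constrains):

```
# vm.args / erl command-line flags for TLS distribution (illustrative path)
-proto_dist inet_tls
-ssl_dist_optfile "/etc/myapp/ssl_dist.conf"
```

The same flags apply whether the node is started directly with erl or through a release's vm.args.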

5. Operations

FIPS-140 defines requirements regarding security measures for post-implementation procedures and data acquisition functions, which span aspects such as the system's primary operational functions, auditing, OAM, and so forth, where the system in question is queried for various information during its uptime. Querying procedures could be typical request-response procedures in client-server architectures, or the fetching of performance data such as statistics and other queryable, exposed information deemed useful. Conformance to FIPS-140 in operational procedures is closely tied to the requirements stipulated on Communications.

Secure channels may have to be established for procedures such as statistics acquisition being rendered on web user interfaces, or logs being securely shipped to remote servers, and so forth.


That’s about it on the overview of aspects to be considered when provisioning FIPS-140 compliant Erlang/Elixir based systems. Alterations which would need to be implemented would vary depending on the application. Complexities involved would depend on aspects such as the system’s architectural design, implementation, dependencies in use, and more.

Each aspect considered requires a strong understanding of the system in question; hence vital activities such as architecture, code and deployment reviews are part of the background preparation that needs to be established prior to implementing any proposed FIPS-140 conformance changes. A system labelled FIPS-140 compliant definitely carries more weight and a stronger reputation, bearing in mind the trust it conveys and the strictly security-constrained arenas in which it may be deployed.

Get in touch

For any questions regarding FIPS-140 compliance for your Erlang/Elixir systems, be it a custom application, a Phoenix application, a RabbitMQ, Riak, MongooseIM or WombatOAM installation, or an Erlang/Elixir based embedded system, don't hesitate to get in touch about:

  • FIPS-140 compliance
  • FIPS-140 recommendations
  • Design and implementation to fulfil FIPS-140
  • Training for FIPS-140 compliance




What's new in Phoenix development - February 2018

With the new year, the Phoenix team has been making steady progress towards a 1.4 release with some great new features. There’s still a few milestones yet to hit before release, but in master you’ll find HTTP2 support, faster development compile-times, new JSON encoding, and more. Let’s dive in and take a tour of the progress we’ve made over the last few months.

HTTP2 Support

Phoenix 1.4 will ship with HTTP2 support thanks to the release of Cowboy2 and the work of Phoenix core-team member Gary Rennie with his Plug and Phoenix integration. Phoenix will be released with Cowboy2 as opt-in, to allow more time for its development to settle. Future releases of Phoenix will ship with HTTP2 support by default once all the dust settles. For those that want HTTP2 today, opting into H2 support will be as easy as changing the :cowboy dependency to “~> 2.0” and specifying the handler in your endpoint configuration. HTTP2 brings server push capabilities and reduced latency. Check out Wikipedia’s overview of HTTP2 to learn more.
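Sketching the opt-in (hedged: the handler module name below follows pre-release Phoenix master at the time of writing and may change before the final 1.4 release):

```elixir
# mix.exs - swap the Cowboy dependency
{:cowboy, "~> 2.0"}

# config/config.exs - point the endpoint at the Cowboy2 handler
config :my_app, MyAppWeb.Endpoint,
  handler: Phoenix.Endpoint.Cowboy2Handler
```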

Faster development compilation

One of Phoenix’s core strengths is its speed, and as we like to remind folks, this goes beyond those microsecond response times you see in the server logs. Production speed is just one part of performance. Fast response times are great, but if your development workflow is tedious or your test suite is painfully slow, the production wins come at a great cost to productivity.

Fortunately Elixir and Phoenix optimize for the entire development process. Whether you’re running tests, developing your application, or serving requests to end-users, your application should be as fast as possible and use as many CPU cores and resources as we can take advantage of.

With this in mind, the Phoenix team is always looking where we can improve performance both in production as well as development. Some users with large applications were seeing increasingly long compile-times. This was tracked down to the Phoenix router causing large compile-time dependencies across the code-base. During development, some users were experiencing large recompiles of all these dependent modules in their application.

A deeper look at the problem

To understand why the Phoenix router causes compile-time dependencies, we have to dive deeper into the way Plug works and show off a little bit of the metaprogramming that's happening under the hood.

Let’s say you define an AuthenticateUser plug which accepts options on how to lookup the user from the session.

defmodule MyAppWeb.AuthenticateUser do
  def init(opts), do: Enum.into(opts, %{session_key: "user_id"})

  def call(conn, %{session_key: key}) do
    case Plug.Conn.get_session(conn, key) do
      nil -> Plug.Conn.halt(conn)
      user_id -> Plug.Conn.assign(conn, :current_user_id, user_id)
    end
  end
end

To optimize the session_key lookup at runtime, we convert the keyword list passed to the plug into a map, as well as assign defaults to the options. By doing this coercion and defaulting in init, plug will perform this work at compile time. This allows us to skip this work at runtime. Every request will then be passed the already coerced options, which is a great way to optimize unnecessary runtime overhead.
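The coercion itself is plain Enum.into/2, and is easy to see in isolation in IEx; the caller's keyword options are merged over the defaults map:

```elixir
# Defaults map from init/1, merged with the caller's keyword options;
# the caller's entries win over the defaults.
defaults = %{session_key: "user_id"}
opts = Enum.into([session_key: "uid"], defaults)
IO.inspect(opts)  # %{session_key: "uid"}
```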

The side-effect of this optimization is we must invoke AuthenticateUser.init/1 at compile-time to perform the work. This is where the compile-time dependencies begin to build and cascade. We can see why this happens by looking at the code that is generated underneath our plug call. When you plug a module in your Router, like so:

pipeline :browser do
  plug MyAppWeb.AuthenticateUser, session_key: "uid"
end

The following code is generated:

case, %{session_key: "uid"}) do
  %Plug.Conn{halted: true} = conn ->
  %Plug.Conn{} = conn ->
    case ... do # further nested plug calls
  _ ->
    raise("expected to return a Plug.Conn")

Notice how our case statement includes the final %{session_key: "uid"} options? This is because we called AuthenticateUser.init/1 at compile-time, and generated the code above to be run at runtime. While this is great for production, we want to avoid the compile-time call in development to gain faster refresh-driven-development times as we’re constantly recompiling the project in development.

Implementing the solution

To get the best of both worlds, the solution is easy. We can generate the compile-time optimized code in production and test environments, while invoking our init calls at runtime in development. This prunes our compile-time dependencies at the cost of small runtime work. For development environments, we’ll never notice the difference since the application is never under load.

To implement the fix, the Phoenix team introduced a new init_mode option to Plug.Builder.compile/3 which configures where the plug’s init/1 is called – :compile for compile-time (default), or :runtime. Phoenix supports this new configuration via the following mix config:

config :phoenix, :plug_init_mode, :runtime

With this in place, our generated AuthenticateUser code would look like this in dev:

case, AuthenticateUser.init(session_key: "uid")) do
  %Plug.Conn{halted: true} = conn ->
  %Plug.Conn{} = conn ->
    case ... do # further nested plug calls
  _ ->
    raise("expected to return a Plug.Conn")

Now every request to our application calls AuthenticateUser.init/1 with our plug options, since this is now a runtime call to coerce the final options. The end result is faster development compilation while maintaining production optimized code.

New Projects use Elixir’s new 1.5+ child_spec

Also coming in the next Phoenix release is the inclusion of the new, streamlined Elixir 1.5+ child_spec.

Prior to Elixir 1.5, your Phoenix projects’ application.ex had code like this:

# lib/my_app/application.ex
import Supervisor.Spec

children = [
  supervisor(MyApp.Repo, []),
  supervisor(MyApp.Web.Endpoint, []),
  worker(MyApp.Worker, [opts])
]

Supervisor.start_link(children, strategy: :one_for_one)

New projects will have the following specification:

children = [
  {Foo.Worker, opts}
]

Supervisor.start_link(children, strategy: :one_for_one)

The new Elixir 1.5+ child_spec streamlines how child processes are started and supervised, by pushing the child specification down to the module, rather than relying on the developer to know if they need to start a worker or supervisor. This is great to not only prevent bugs, but to also allow your architecture to start simple and grow as needed. For example, you could start with a single worker process and later grow that worker into an entire supervision tree. Any caller’s using your simple worker in their supervision tree won’t require any code change once you level-up to a supervised tree internally. This is a huge win for maintenance and composability.
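A hedged sketch of what that looks like in practice (Foo.Worker and its options are illustrative): use GenServer injects a default child_spec/1 into the module, and the {module, opts} tuple in the children list simply calls it:

```elixir
defmodule Foo.Worker do
  use GenServer  # injects a default, overridable child_spec/1

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts), do: {:ok, opts}
end

# {Foo.Worker, opts} resolves to Foo.Worker.child_spec(opts), which points
# the supervisor at Foo.Worker.start_link/1.
children = [{Foo.Worker, []}]
{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
```

If the worker later grows into a supervision tree, only its own child_spec/1 and start_link/1 change; callers keep the same {Foo.Worker, opts} entry.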

Explicit Router helper aliases

We have also removed the imports of the MyAppWeb.Router.Helpers from your web.ex in newly generated applications, instead favoring an explicit alias:

alias MyAppWeb.Router.Helpers, as: Routes

This will change your code in controllers and views to call the router functions off the new alias, so instead of:

redirect(conn, to: article_path(conn, :index))

We are promoting:

redirect(conn, to: Routes.article_path(conn, :index))

This makes it much less confusing for newcomers and seasoned developers alike when coming into a project. Now figuring out where router functions are defined or where to look up documentation in IEx or ExDoc is obvious when you come across the Routes alias. It also prevents confusing circular compiler errors when trying to import the Router helpers into a plug module that also is plugged from the router.

New default JSON encoder with Jason library

The next Phoenix release will also include Jason, the new JSON encoding library written by Michał Muskała of the Elixir core-team. Jason is the fastest pure Elixir JSON encoder available, even beating C-based encoding libraries under certain scenarios. It is also maintained by an Elixir core-team member, which makes it a natural choice for projects looking to get the best out of their applications. New applications will include the following mix configuration to support the Jason library:

config :phoenix, :json_library, Jason

The Phoenix team will be busy fine-tuning these new features ahead of our next release. We’ll see you next month with our latest updates!


Why Ecto's Way of Storing Embedded Lists of Maps Makes Querying Hard

The Problem

We have a bunch of embedded models on an Ecto model. They’re self-contained and don’t need to be normalized or unique. They need to be flexible and lightweight and not have a lot of overhead in either UX or code.

Here’s a migration that defines the embeds_many column according to how Ecto tells you to.

defmodule Grocer.Repo.Migrations.CreateFoods do
  use Ecto.Migration

  def change do
    create table("foods") do
      add :name, :string
      add :upc, :string
      add :ingredients, {:array, :map}, default: []
    end
  end
end

And the associated schemas:

schema "foods" do
  field :name, :string
  field :upc, :string
  embeds_many :ingredients, Grocer.Ingredient
end

embedded_schema do
  field :name, :string
  field :mg, :integer
end
Pretty standard stuff. Anyone who’s defined an embeds_many has seen that before. I’ve found that users of ORMs like Ecto, etc. don’t always look at the resulting database table, so let’s look. When we go into psql and enter \d foods, we’ll see this:

                                   Table "public.foods"
   Column    |          Type          |                     Modifiers
-------------+------------------------+----------------------------------------------------
 id          | bigint                 | not null default nextval('foods_id_seq'::regclass)
 name        | character varying(255) |
 upc         | character varying(255) |
 ingredients | jsonb[]                | default ARRAY[]::jsonb[]
Indexes:
    "foods_pkey" PRIMARY KEY, btree (id)

Note that Ecto's {:array, :map} column type gives us jsonb[] in the database, which means "a Postgres array of jsonb values".

And there's the thing: Postgres has an array type of its own (and has for a while). Postgres arrays are well supported, but they don't work the way JSON arrays do. But you know what does work like JSON? JSON.

Getting our hands dirty

I wanted to see how well an array of JSON would stack up against a single JSON document that contains an array. Since we have the JSONB[] column already, let’s start checking there. Important note: JSONB and JSON are functionally equivalent, but have some important under-the-hood differences. Put briefly: You want JSONB. You don’t want JSON.

I inserted 1 million rows using this quick script which isn’t relevant but I’ll link because the names it made entertained me greatly (and I want you to be entertained, too). Through the magic of EXPLAIN ANALYZE, we can see how the performance will be. Our SQL statement:

EXPLAIN ANALYZE SELECT name FROM foods WHERE ARRAY_TO_JSON(ingredients)::JSONB @> '[{"name": "Milk"}]';

We get these results:

                                                  QUERY PLAN
 Seq Scan on foods  (cost=0.00..50729.56 rows=1000 width=23) (actual time=0.134..4395.623 rows=287015 loops=1)
   Filter: ((array_to_json(ingredients))::jsonb @> '[{"name": "Milk"}]'::jsonb)
   Rows Removed by Filter: 712980
 Planning time: 0.040 ms
 Execution time: 4411.212 ms

4.4 seconds? Dag, yo. That’s not really tenable.

Ok, so that's one option. The other option is simply JSONB. I changed the column to a real :jsonb in the migration (note, the default is now '[]' and not [] like it was before):

      add :ingredients, :jsonb, default: "[]"

And Postgres said the column type became this:

 ingredients | jsonb                  | default '[]'::jsonb

With no index on the column, the equivalent query from before, EXPLAIN ANALYZE SELECT * FROM foods WHERE ingredients @> '[{"name": "Milk"}]';, looked like this:

                                                  QUERY PLAN
 Seq Scan on foods  (cost=0.00..39618.01 rows=1000 width=187) (actual time=0.040..575.419 rows=208370 loops=1)
   Filter: (ingredients @> '[{"name": "Milk"}]'::jsonb)
   Rows Removed by Filter: 791631
 Planning time: 0.036 ms
 Execution time: 585.420 ms

Better! But, let’s be honest, ~0.6 seconds still isn’t great. However, a very important part about JSONB columns is that you can index them. And you can’t really expect good performance out of a non-trivial query without good indexing.

To add an index, we can add execute "CREATE INDEX ingredients_gin ON foods USING GIN (ingredients);" to our migration (and execute "DROP INDEX ingredients_gin;" to the down migration). This lets the same query as before have these results:

                                                            QUERY PLAN
 Bitmap Heap Scan on foods  (cost=27.75..3407.64 rows=1000 width=187) (actual time=42.464..248.331 rows=208370 loops=1)
   Recheck Cond: (ingredients @> '[{"name": "Milk"}]'::jsonb)
   Heap Blocks: exact=27113
   ->  Bitmap Index Scan on ingredients_gin  (cost=0.00..27.50 rows=1000 width=0) (actual time=37.778..37.778 rows=208370 loops=1)
         Index Cond: (ingredients @> '[{"name": "Milk"}]'::jsonb)
 Planning time: 0.054 ms
 Execution time: 258.965 ms

That’s better. That’s way better. It’s still not 100% perfect, but that’s serviceable for columns you’re not querying on in a tight loop. And it’s quite a bit better than the 4.4 seconds from the first example.

Can I use this?

This is all fine here in a blog post. It’s useless to me if I can’t actually use this in my code. Let’s see it in action in Elixir and not just in SQL.

iex(1)> %Grocer.Food{} |> Grocer.Food.changeset(%{name: "Egg", upc: "12345", ingredients: [%{name: "Egg", mg: 1}]}) |> Grocer.Repo.insert
[debug] QUERY OK db=42.6ms
INSERT INTO "foods" ("ingredients","name","upc") VALUES ($1,$2,$3) RETURNING "id" [[%{id: "fea00c41-7050-4795-9aca-5b5d39affa60", mg: 1, name: "Egg"}], "Egg", "12345"]
{:ok,
 %Grocer.Food{
   __meta__: #Ecto.Schema.Metadata<:loaded, "foods">,
   id: 1000002,
   ingredients: [
     %Grocer.Ingredient{
       id: "fea00c41-7050-4795-9aca-5b5d39affa60",
       mg: 1,
       name: "Egg"
     }
   ],
   name: "Egg",
   upc: "12345"
 }}

It certainly appears to work just as well as the recommended column type. What does the database look like?

grocer_dev=# select * from foods where upc = '12345';
   id    | name |  upc  |                               ingredients
---------+------+-------+---------------------------------------------------------------------------
 1000002 | Egg  | 12345 | [{"id": "fea00c41-7050-4795-9aca-5b5d39affa60", "mg": 1, "name": "Egg"}]

That looks like it works just fine!

So why do we use JSONB[]?

Postgres came out with JSON support in version 9.2. It was implemented as little more than a text blob, though. Version 9.4 added JSONB support, but that didn't come out until the end of 2014. Ecto's first commits for embeds_many came in mid-2015. It seems quite likely that the concern at the time was that older versions of Postgres (which would still have been very common) wouldn't have good JSONB support and would therefore not perform well enough.

Since we’re over 3 years out from the release of Postgres 9.4, I think it’s probably been long enough to consider switching your column types, especially on new development and when you control your database or you’re using something like Heroku which keeps its databases nice and shiny.
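If you do switch, here is a hedged migration sketch for converting an existing jsonb[] column in place (table and column names from this post; the USING clause assumes every row holds an array of JSON objects, and the down migration is only a best-effort inverse):

```elixir
defmodule Grocer.Repo.Migrations.ConvertIngredientsToJsonb do
  use Ecto.Migration

  def up do
    # array_to_json/1 accepts any array type, including jsonb[]
    execute """
    ALTER TABLE foods
      ALTER COLUMN ingredients DROP DEFAULT,
      ALTER COLUMN ingredients TYPE jsonb USING array_to_json(ingredients)::jsonb,
      ALTER COLUMN ingredients SET DEFAULT '[]'::jsonb
    """
  end

  def down do
    # unnest the jsonb array back into a Postgres array of jsonb values
    execute """
    ALTER TABLE foods
      ALTER COLUMN ingredients DROP DEFAULT,
      ALTER COLUMN ingredients TYPE jsonb[] USING ARRAY(SELECT jsonb_array_elements(ingredients)),
      ALTER COLUMN ingredients SET DEFAULT ARRAY[]::jsonb[]
    """
  end
end
```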


Erlang Garbage Collector

This is an update to our previous blog post, Erlang 19.0 Garbage Collector. With Erlang/OTP 20.0, some things have changed, which is the reason for this updated blog post.

Erlang Garbage Collector

Erlang manages dynamic memory with a tracing garbage collector; more precisely, a per-process generational semi-space copying collector using Cheney's copy collection algorithm, together with a global large object space.


Each Erlang process has its own stack and heap which are allocated in the same memory block and grow towards each other. When the stack and the heap meet, the garbage collector is triggered and memory is reclaimed. If not enough memory was reclaimed, the heap will grow.

Creating Data

Terms are created on the heap by evaluating expressions. There are two major types of terms: immediate terms, which require no heap space (small integers, atoms, pids, port ids, etc.), and cons or boxed terms (tuples, bignums, binaries, etc.) that do require heap space. Immediate terms do not need any heap space because they are embedded into the containing structure.

Let’s look at an example that returns a tuple with the newly created data.

data(Foo) ->
   Cons = [42|Foo],
   Literal = {text, "hello world!"},
   {tag, Cons, Literal}.

In this example we first create a new cons cell with an integer and a tuple with some text. Then a tuple of size three wrapping the other values with an atom tag is created and returned.

On the heap tuples require a word size for each of its elements as well as for the header. Cons cells always require two words. Adding these things together, we get seven words for the tuples and 26 words for the cons cells. The string "hello world!" is a list of cons cells and thus requires 24 words. The atom tag and the integer 42 do not require any additional heap memory since it is an immediate. Adding all the terms together, the heap space required in this example should be 33 words.
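A way to check this arithmetic from a shell is the undocumented-but-handy :erts_debug.size/1, which reports the number of heap words a term occupies (built here with an immediate atom standing in for Foo):

```elixir
# 2 (cons cell) + 3 (2-tuple) + 24 (12-character list) + 4 (3-tuple) = 33 words
term = {:tag, [42 | :foo], {:text, 'hello world!'}}
IO.inspect(:erts_debug.size(term))  # 33
```

Note that :erts_debug is a debugging aid, not a stable API.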

Compiling this code to beam assembly (erlc -S) shows exactly what is happening.

    {test_heap,6,1}.
    {put_list,{integer,42},{x,0},{x,1}}.
    {put_tuple,3,{x,0}}.
    {put,{atom,tag}}.
    {put,{x,1}}.
    {put,{literal,{text,"hello world!"}}}.
    return.

Looking at the assembler code we can see three things: the heap requirement in this function turns out to be only six words, as seen in the {test_heap,6,1} instruction; all the allocations are combined into a single instruction; and the bulk of the data, {text, "hello world!"}, is a literal. Literals, sometimes referred to as constants, are not allocated in the function since they are a part of the module and allocated at load time.

If there is not enough space available on the heap to satisfy the test_heap instruction's request for memory, then a garbage collection is initiated. It may happen immediately in the test_heap instruction, or it can be delayed until a later time depending on what state the process is in. If the garbage collection is delayed, any memory needed will be allocated in heap fragments. Heap fragments are extra memory blocks that are a part of the young heap, but are not allocated in the contiguous area where terms normally reside. See The young heap for more details.

The collector

Erlang has a copying semi-space garbage collector. This means that when doing a garbage collection, the terms are copied from one distinct area, called the from space, to a new clean area, called the to space. The collector starts by scanning the root-set (stack, registers, etc).

Garbage collection: initial values

It follows all the pointers from the root-set to the heap and copies each term word by word to the to space.

After the header word has been copied, a move marker is destructively placed in it, pointing to the term in the to space. Any other term that points to the already-moved term will see this move marker and copy the referring pointer instead. For example, if we have the following Erlang code:

foo(Arg) ->
    T = {test, Arg},
    {wrapper, T, T, T}.

Only one copy of T exists on the heap and during the garbage collection only the first time T is encountered will it be copied.
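This sharing can be measured: :erts_debug.size/1 counts the shared subterm once, while :erts_debug.flat_size/1 counts it per reference, i.e. what a copy that loses sharing would cost:

```elixir
t = {:test, :arg}              # 3 words (header + 2 elements)
wrapper = {:wrapper, t, t, t}  # 5 words (header + 4 elements)

IO.inspect(:erts_debug.size(wrapper))       # 5 + 3 = 8, T counted once
IO.inspect(:erts_debug.flat_size(wrapper))  # 5 + 3 * 3 = 14, T counted per reference
```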

Garbage collection: root set scan

After all terms referenced by the root-set have been copied, the collector scans the to space and copies all terms that these terms reference. When scanning, the collector steps through each term on the to space and any term still referencing the from space is copied over to the to space. Some terms contain non-term data (the payload of an on-heap binary, for instance). When encountered by the collector, these values are simply skipped.

Garbage collection: heap scan

Every term object we can reach is copied to the to space and stored on top of the scan stop line, and then the scan stop is moved to the end of the last object.

Garbage collection: heap scan

When the scan stop marker catches up to the scan start marker, the garbage collection is done. At this point we can deallocate the entire from space and therefore reclaim the entire young heap.

Generational Garbage Collection

In addition to the collection algorithm described above, the Erlang garbage collector also provides generational garbage collection. An additional heap, called the old heap, is used where the long lived data is stored. The original heap is called the young heap, or sometimes the allocation heap.

With this in mind we can look at Erlang's garbage collection again. During the copy stage, anything that should be copied to the young to space is instead copied to the old to space if it is below the high-watermark.

Garbage collection: heap scan

The high-watermark is placed where the previous garbage collection (described in Overview) ended and we have introduced a new area called the old heap. When doing the normal garbage collection pass, any term that is located below the high-watermark is copied to the old to space instead of the young.

Garbage collection: heap scan

In the next garbage collection, any pointers to the old heap will be ignored and not scanned. This way the garbage collector does not have to scan the long-lived terms.

Generational garbage collection aims to increase performance at the expense of memory. This is achieved because only the young, smaller, heap is considered in most garbage collections.

The generational hypothesis predicts that most terms tend to die young, and for an immutable language such as Erlang, young terms die even faster than in other languages. So for most usage patterns the data in the new heap will die very soon after it is allocated. This is good because it limits the amount of data copied to the old heap and also because the garbage collection algorithm used is proportional to the amount of live data on the heap.

One critical issue to note here is that any term on the young heap can reference terms on the old heap but no term on the old heap may refer to a term on the young heap. This is due to the nature of the copy algorithm. Anything referenced by an old heap term is not included in the reference tree, root-set and its followers, and hence is not copied. If it was, the data would be lost, fire and brimstone would rise to cover the earth. Fortunately, this comes naturally for Erlang because the terms are immutable and thus there can be no pointers modified on the old heap to point to the young heap.

To reclaim data from the old heap, both young and old heaps are included during the collection and copied to a common to space. Both the from space of the young and old heap are then deallocated and the procedure will start over from the beginning. This type of garbage collection is called a full sweep and is triggered when the size of the area under the high-watermark is larger than the size of the free area of the old heap. It can also be triggered by doing a manual call to erlang:garbage_collect(), or by running into the young garbage collection limit set by spawn_opt(fun(),[{fullsweep_after, N}]) where N is the number of young garbage collections to do before forcing a garbage collection of both young and old heap.
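These controls are reachable from Elixir too; a small sketch forcing a collection and tuning the fullsweep interval for a spawned process:

```elixir
# Force a garbage collection of the current process right away:
true = :erlang.garbage_collect(self())

# Spawn a process that full-sweeps after every 10 young collections:
pid = :erlang.spawn_opt(fn -> :timer.sleep(:infinity) end, [{:fullsweep_after, 10}])

# The setting shows up in the process's garbage collection info:
{:garbage_collection, info} = :erlang.process_info(pid, :garbage_collection)
IO.inspect(info[:fullsweep_after])  # 10
```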

The young heap

The young heap, or the allocation heap, consists of the stack and heap as described in the Overview. However, it also includes any heap fragments that are attached to the heap. All of the heap fragments are considered to be above the high-watermark and part of the young generation. Heap fragments contain terms that either did not fit on the heap, or were created by another process and then attached to the heap. For instance, if the BIF binary_to_term creates a term which does not fit on the current heap without doing a garbage collection, it will create a heap fragment for the term and then schedule a garbage collection for later. Also, if a message is sent to the process, the payload may be placed in a heap fragment and that fragment is added to the young heap when the message is matched in a receive clause.

This procedure differs from how it worked prior to Erlang/OTP 19.0. Before 19.0, only a contiguous memory block where the young heap and stack resided was considered to be part of the young heap. Heap fragments and messages were immediately copied into the young heap before they could be inspected by the Erlang program. The behaviour introduced in 19.0 is superior in many ways - most significantly it reduces the number of necessary copy operations and the root set for garbage collection.

Sizing the heap

As mentioned in the Overview, the size of the heap grows to accommodate more data. Heaps grow in two stages: first a variation of the Fibonacci sequence is used, starting at 233 words; then, at about 1 mega word, the heap grows only in 20% increments.
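
The growth stages are visible at runtime: the emulator exposes its table of possible heap sizes through erlang:system_info(heap_sizes). A minimal sketch (the module name is illustrative; exact values depend on the emulator build):

```erlang
%% Minimal sketch: inspecting the VM's heap growth stages (module name illustrative).
-module(heap_sizes_demo).
-export([first_stages/1]).

%% Return the N smallest heap sizes (in words) that the emulator will use
%% when growing a process heap.
first_stages(N) ->
    Sizes = lists:sort(erlang:system_info(heap_sizes)),
    lists:sublist(Sizes, N).
```

On a default build the smallest entries follow the Fibonacci-like series described above, with 233 words as the starting point.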

There are two occasions when the young heap grows:

  1. if the total size of the heap + message and heap fragments exceeds the current heap size.
  2. if, after a fullsweep, the total amount of live objects is greater than 75% of the heap.

There are two occasions when the young heap is shrunk:

  1. if, after a young collection, the total amount of live objects is less than 25% of the heap and the young heap is “big”.
  2. if, after a fullsweep, the total amount of live objects is less than 25% of the heap.

The old heap is always one step ahead of the young heap in the heap growth stages.


Literals

When garbage collecting a heap (young or old), all literals are left in place and not copied. To figure out whether a term should be copied when doing a garbage collection, the following pseudo code is used:

if (erts_is_literal(ptr) || (on_old_heap(ptr) && !fullsweep)) {
  /* literal or non-fullsweep - do not copy */
} else {
  /* copy the term */
}
The erts_is_literal check works differently on different architectures and operating systems.

On 64 bit systems that allow mapping of unreserved virtual memory areas (most operating systems except Windows), an area of 1 GB (by default) is mapped, and all literals are placed within that area. Then all that has to be done to determine whether something is a literal is two quick pointer checks. This scheme relies on the fact that a memory page that has not been touched yet does not take any actual space, so even though 1 GB of virtual memory is mapped, only the memory actually needed for literals is allocated in RAM. The size of the literal area is configurable through the +MIscs erts_alloc option.

On 32 bit systems, there is not enough virtual memory space to allocate 1 GB for just literals, so instead small 256 KB sized literal regions are created on demand and a card mark bit-array of the entire 32 bit memory space is then used to determine if a term is a literal or not. Since the total memory space is only 32 bits, the card mark bit-array is only 256 words large. On a 64 bit system the same bit-array would have to be 1 tera words large, so this technique is only viable on 32 bit systems. Doing lookups in the array is a little more expensive than just doing the pointer checks that can be done on 64 bit systems, but not extremely so.

On 64 bit Windows, on which erts_alloc cannot do unreserved virtual memory mappings, a special tag within the Erlang term object is used to determine whether something is a literal. This check is very cheap; however, the tag is only available on 64 bit machines, and since it opens up a great deal of other nice optimizations in the future (for instance a more compact list implementation), it is not used on operating systems where it is not needed.

This behaviour is different from how it worked prior to Erlang/OTP 19.0. Before 19.0 the literal check was done by checking whether the pointer pointed into the young or old heap block. If it did not, it was considered a literal. This led to considerable overhead and strange memory usage scenarios, so it was removed in 19.0.

Binary heap

The binary heap works as a large-object space for binary terms that are greater than 64 bytes (from now on called off-heap binaries). The binary heap is reference counted, and a pointer to the off-heap binary is stored on the process heap. To keep track of when to decrement the reference counter of an off-heap binary, a linked list (the MSO, or mark and sweep object, list) containing funs and externals as well as off-heap binaries is woven through the heap. After a garbage collection is done, the MSO list is swept, and any off-heap binary that does not have a move marker written into its header words has its reference count decremented and is potentially freed.
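
The 64-byte boundary can be observed from Erlang code, since process_info(Pid, binary) lists the off-heap binaries a process currently references. A minimal sketch (the module name is illustrative):

```erlang
%% Minimal sketch: heap binaries vs off-heap (reference-counted) binaries
%% (module name illustrative).
-module(binheap_demo).
-export([offheap_count/0]).

offheap_count() ->
    Small = binary:copy(<<0>>, 64),  %% 64 bytes: stays on the process heap
    Large = binary:copy(<<0>>, 65),  %% 65 bytes: goes to the binary heap
    %% process_info/2 with the 'binary' item lists the off-heap binaries
    %% this process references, as {Id, Size, RefCount} tuples.
    {binary, Refs} = erlang:process_info(self(), binary),
    {byte_size(Small), byte_size(Large), length(Refs)}.
```

The 64-byte binary never shows up in the list, while the 65-byte one does, which is a handy way to check whether a given process is pinning large amounts of binary data.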

All items in the MSO list are ordered by the time they were added to the process heap, so when doing a minor garbage collection, the MSO sweeper only has to sweep until it encounters an off-heap binary that is on the old heap.

Virtual Binary heap

Each process has a virtual binary heap associated with it, sized as the sum of all the off-heap binaries the process currently references. The virtual binary heap also has a limit, and it grows and shrinks depending on how off-heap binaries are used by the process. The same growth and shrink mechanisms are used for the virtual binary heap as for the term heap: first a Fibonacci-like series, then 20% growth.

The virtual binary heap exists in order to trigger garbage collections earlier when potentially there is a very large amount of off-heap binary data that could be reclaimed. This approach does not catch all problems with binary memory not being released soon enough, but it does catch a lot of them.


Messages

Messages can become a part of the process heap at different times. This depends on how the process is configured. We can configure the behaviour of each process using process_flag(message_queue_data, off_heap | on_heap), or we can set a default for all processes at start using the option +hmqd.

What do these different configurations do and when should we use them? Let’s start by going through what happens when one Erlang process sends a message to another. The sending process needs to do a couple of things:

  1. calculate how large the message to be sent is
  2. allocate enough space to fit the entire message
  3. copy the message payload
  4. allocate a message container with some meta data
  5. insert the message container in the receiver process’ message queue

The process flag message_queue_data, of the receiver process, controls the message allocating strategy of the sender process in step 2 and also how the message data is treated by the garbage collector.

The procedure above differs from how it worked prior to 19.0. Before 19.0 there was no configuration option; the behaviour was always very similar to that of the on_heap option in 19.0.

Message allocating strategies

If set to on_heap, the sending process will first attempt to allocate the space for the message directly on the young heap block of the receiving process. This is not always possible, as it requires taking the main lock of the receiving process. The main lock is also held while the process is executing, so lock conflicts are likely in an intensely collaborating system. If the sending process cannot acquire the main lock, a heap fragment is instead created for the message and the message payload is copied into it. With the off_heap option, the sending process always creates heap fragments for messages sent to that process.
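
Both ways of selecting a strategy can be sketched as follows (the module name is illustrative):

```erlang
%% Minimal sketch: choosing the message allocating strategy (module name illustrative).
-module(mqd_demo).
-export([start/0]).

start() ->
    %% Senders to this process will always allocate heap fragments:
    Pid = spawn_opt(fun loop/0, [{message_queue_data, off_heap}]),
    {message_queue_data, off_heap} =
        erlang:process_info(Pid, message_queue_data),

    %% A process can also change its own strategy at runtime;
    %% process_flag/2 returns the previous value.
    OldValue = process_flag(message_queue_data, on_heap),
    Pid ! stop,
    {Pid, OldValue}.

loop() ->
    receive stop -> ok end.
```

process_info(Pid, message_queue_data) is a convenient way to confirm which strategy is in effect for a running process.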

There are a bunch of different tradeoffs that come into play when trying to figure out which of the strategies you want to use.

Using off_heap may seem like a nice way to get a more scalable system, as you get very little contention on the main locks. However, allocating a heap fragment is more expensive than allocating on the heap of the receiving process, so if contention is very unlikely, it is more efficient to try to allocate the message directly on the receiving process’ heap.

Using on_heap will force all messages to be part of the young heap, which increases the amount of data that the garbage collector has to move. If a garbage collection is triggered while processing a large number of messages, they will be copied to the young heap. This in turn means the messages will quickly be promoted to the old heap, increasing its size. This may be good or bad depending on exactly what the process does. A large old heap means that the young heap will also be larger, which in turn means that fewer garbage collections will be triggered while processing the message queue. This temporarily increases the throughput of the process at the cost of more memory usage. However, if, after all the messages have been consumed, the process enters a state where far fewer messages are received, it may be a long time before the next fullsweep garbage collection happens, and the messages on the old heap will remain there until it does. So while on_heap is potentially faster than the other mode, it uses more memory for a longer time. This is the legacy mode, and is almost how the message queue was handled before Erlang/OTP 19.0.

Which one of these strategies is best depends a lot on what the process is doing and how it interacts with other processes. So, as always, profile the application and see how it behaves with the different options.

[1]: C. J. Cheney. A nonrecursive list compacting algorithm. Commun. ACM, 13(11):677–678, Nov. 1970.

[2]: D. Ungar. Generation scavenging: A non-disruptive high performance storage reclamation algorithm. SIGSOFT Softw. Eng. Notes, 9(3):157–167, Apr. 1984.



Of communities and bikesheds

So, this morning a new Erlang package building tool was announced. I happened to be reading the erlang-questions mailing list (a fairly rare occurrence, as we’ll get into) and I saw the announcement. As soon as I saw the name of the project, I decided to ignore the thread. However, that thread soon re-connected with me via 2 IRC channels, a Slack channel and Twitter. The project’s name? Coon.

Now, having grown up in Ireland, I was unfamiliar with the word, or the racist connotations. Only since moving back to the US have I been introduced to the surprisingly large lexicon of American racism that was not mentioned in ‘To Kill a Mockingbird’ or ‘Huckleberry Finn’. Thus, given that the author didn’t seem to be a native English speaker, and certainly not someone expected to be familiar with derogatory American slang, I expected someone to politely point this out and for the author to realize they’d made a terrible mistake and rename it.

Well, at least the first part happened.

About now is the time to mention why I don’t regularly follow the erlang-questions mailing list anymore. Many years ago, when I was new to Erlang, I was an avid reader of the mailing list. However, over time something changed. I’m not sure if I simply became proficient enough with the language or if the tone of the mailing list changed as the community grew, but I began to lose patience with the threads on naming and API design that would always grow out of all proportion to their importance while deep, technical discussions would often be overshadowed. For the most part this was just annoying, but harmless and I gradually drifted away from paying close attention to it.

Today however, things are a little different. There’s yet another naming discussion, and people are adding their opinions to a dog-pile of a thread faster than you can read the responses, but this time it’s about the accidental use of a racist slur as a project name.

Now, let’s remember, this is a programming language community. These communities are supposed to help practitioners of the language, advocate for its use and generally be a marketing and outreach platform to encourage people to use it. There are a lot of programming languages these days and developer mindshare is valuable, especially for an oddball language like Erlang. And while it is true that communities are not always (or maybe even often) inclusive or welcoming, surely programming communities should be.

Instead the thread (and I confess to having not read the bulk of it) devolved into arguments around intent vs effect and appeals that other problematic project names had flown under the radar in the past. I’m sorry, but this is not how it works. When you create something and release it into the world, you lose control of the interpretation that thing takes on. I’ve seen authors whose work is analyzed in a school curriculum vehemently disagree with the interpretation of their creation. It’s easy to forget that building things, naming things, etc. are as much, if not more, about the effect produced in the consumer of that work as they are about the author’s intent. You don’t get to say “That’s not what I meant” when someone points out a problem with what you’ve done; you need to examine the effect and decide whether you should correct it. This is your responsibility as a member of a community, and if you’re hurting inclusivity or diversity then you are not being a good member of that community.

When I visited ‘coonhub’, the associated website for the tool that lists available packages, I saw one of my own projects prominently featured. Given that I am not a member of a group to which the derogatory term applies, I didn’t expect to feel anything, but instead I felt ashamed that I, however indirectly and involuntarily, was lending support to this. I can’t imagine what it feels like for someone to whom the slur has been applied, but the faint echo I encountered was unpleasant enough to give me pause.

Long story short, I hope the Erlang community can pull its head out of its ass long enough to realize that bikeshedding about something like this is bordering on the obscene and should shut that shit down. The original author should recognize their mistake, sacrifice their beloved ‘coonfig.json’ pun, rename the project and everyone should move on. A 50 email thread on the matter is ridiculous and is not appropriate.


Getting started: Building the core dev team

This is an excerpt from The route to the successful adoption of non-mainstream programming languages by Francesco Cesarini and Mike Williams.

On one end of the scale, we often hear complaints about how hard it is to find Erlang developers. Java, C++ and web developers are everywhere. They are easy to recruit. Not all might be top notch, but at least we can recruit them. Despite that, look at Ericsson, who use Erlang in projects with over a hundred developers each. Members join and leave the team, move to other Erlang or non-Erlang projects, and can easily be swapped around.

Ericsson had to start somewhere; in the early days, everyone on the internal Erlang mailing list knew each other personally. Today, their Erlang programmer count is in the thousands and those using it daily are in the hundreds. But they are not alone. The reason you might not hear from companies such as Klarna, OpenX, AlertLogic, WhatsApp, Cisco and bet365 (to mention but a few) is that they just get on with it. They are too busy building teams to complain how hard it is to find Erlang developers.

We have seen Erlang used by teams of a hundred developers who produced successful products; we have also seen teams of two or three developing, deploying and maintaining a huge code base. Not to mention companies who adopt non-mainstream languages through acquisitions, successfully introducing the technology into the wider organisation instead of making the crazy decision to rewrite the product or service in Java.

But before you start thinking of hundreds, start small.

Start with a few experienced leaders

Like all programming technologies, you must have one or two experienced, good software developers who can lead the way. Like everything in life, even a programming language can be used and abused. You need people who will stop you from making the wrong decisions and help you avoid the mistakes others have made.

Create a small team for the project initiation. Fill it with a few highly experienced experts who will work to shape a plan and get the change sponsors to a common level of understanding.

They have to be enthusiastic, but also make sure they understand and embrace the programming model.

Those who were around in the very early days, when C++ and OO were all the hype, might remember that we spent a lot of our time convincing people that doing OO programming in Erlang was not a good idea, and that, if they were unwilling to make the move to functional and concurrency-oriented programming, they were better off sticking to C++ or Java. No technology (or varying level of enthusiasm) can replace experienced software developers who know what they are doing.

While you’re cherry-picking these leaders among men, see if you can seed one of them as a representative on the steering board. Their experience and enthusiasm will be an asset when it comes to changing hearts and minds across the organisation.

Developer vs programmer

We use the term “software developer” rather than “programmer”, because software development entails much more than programming. When building your team, you need people who understand and can support the product life cycle, combined with subject matter experts who understand the industry vertical you are active in. In small teams these may well be the same people.

Software development entails much more than programming.

You need to ensure you have developers who can define, program, build, test, deploy, document, package, write user manuals, and maintain your system.


You do not want to rely on a lonely hero programmer who does not understand the value of delegation and knowledge sharing, believing that DevOps means being woken up in the middle of the night to address customer complaints about the system not working (and then receiving the employee of the month award for cleaning up the mess they created themselves).

Your operations team should be the ones noticing there is a problem through automated alerts, not your customers!

While you might need at least one hero programmer in your team setting the pace, they should never be left alone. Pair them up with a finisher. If you are struggling to find experienced Erlang developers, pick the Erlang champion in your organization and pair them up with a contractor or consultant.

Together with a very small team of two to five, get them started on a prototype. It will allow you (and them) to validate their ideas on a small scale and make mistakes which will not result in the project failing.

Interested in learning more about the fundamentals of building a team that will last? (You should be.) Learn how to successfully deploy new technology and teams for the long term from myself and Mike Williams, a seasoned Erlang veteran who has introduced the language into numerous industries, including online gambling and betting.



Confounding Beginner Question: What is an Erlang Atom and Why is it Useful?

Like other Erlangers, I tend to take the atom data type for granted. Coming from another language, however, you might be puzzled at why we have all these little strings that aren’t really strings.

The common definition you’ll hear most frequently is something like:

An atom is a label. Its only meaning is itself.

Well, that’s true, but that also sounds a bit useless to someone coming from Python or R or JavaScript or whatever. So let’s break that down: what is a “label” useful for in programs?

  • Variable names are labels.
  • Function names are labels.
  • Module names are labels.
  • The strings you use as keys in a key/value data structure are labels.
  • The enums and label macros you might use in C for semantically significant internal values are almost exactly like atoms.

OK, so we use labels all the time; why don’t any of those other languages have atoms, though? Let’s examine those last two points for a moment for a hint as to why.

In Python, strings are objects, and while building them is expensive, hashing them can be done ahead of time as a cached operation. This means that comparing two strings of arbitrary length for equality is extremely cheap, because it reduces to a comparison of two large integers. This is not true in, say, C or Erlang or Lisp, unless you build your own data structure to carry around the pre-hashed data. In Python it is simple enough to say:

if 'foo' in some_dict:
  # stuff
  # other stuff

In C, however, string comparison is a bit of a hassle, and dealing with string data in a cross-platform environment can be super annoying depending on the age of the systems you might be interacting with or running/building your code on. In Erlang the syntax of string comparison is super simple, but the overhead is not pre-paid as it is in Python. So what is the alternative?

We use integer values to represent keys that are semantically meaningful to the program at the time it is written. But integers are hard to remember, so instead of having magic numbers floating all around the place we typically have semantically significant integer values aliased from a text label as a macro. This is helpful so that I don’t have to remember the meaning of code like

if (condition == 42) launch_missiles();
if (condition == 86) eat_kittens();

Instead I can write code like:

#define UNDER_ATTACK    42
#define VILE_UNDERBEAST 86

if (condition == UNDER_ATTACK)    launch_missiles();
if (condition == VILE_UNDERBEAST) eat_kittens();

It is extremely common in programs to have variables or arguments like condition in the above example. It doesn’t matter whether your language has matching (like Erlang, Rust, logic languages, etc.) or uses explicit conditionals like the fake C example above — there will always be a huge number of micro datatypes that carry great semantic significance within your program and only within your program. It is as useful to be able to label these enumerated values in a way that human coders can understand and remember as it is useful for the computer to be able to compare them as simple integers instead of going to the trouble of string comparison every time your code needs to make a decision (because string comparison entails an arbitrarily long sequence of integer comparisons every single time you compare two strings).

In C we use those macros like above (well, not always; C actually does have super convenient enums that work a lot like atoms, but didn’t when I started using it as a kid in the stone age). In Erlang we just use an atom right there in place. You don’t need a declaration or definition anywhere, the runtime just keeps track of these things for you.

Underneath the hood, Erlang maintains a running table of atom label values and translates them to integer values on the way into the system and back on the way out. The integer each atom actually resolves to is totally unimportant to you, so Erlang abstracts that detail away, but it leaves the machine comparing integer values instead of doing full-string comparisons all over the place.
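
The effect of that table can be observed directly: erlang:system_info(atom_count) (available since OTP 20) reports how many atoms the table currently holds. A minimal sketch (the module name is illustrative):

```erlang
%% Minimal sketch: the atom table grows when a new atom is created
%% (module name illustrative).
-module(atom_demo).
-export([new_atom_grows_table/0]).

new_atom_grows_table() ->
    Before = erlang:system_info(atom_count),
    %% Build an atom name that almost certainly does not exist yet.
    Name = "demo_atom_" ++ integer_to_list(erlang:unique_integer([positive])),
    NewAtom = list_to_atom(Name),
    After = erlang:system_info(atom_count),
    {NewAtom, After > Before}.
```

Because atoms are interned once and never garbage collected, creating atoms dynamically from untrusted input is a well-known way to exhaust the table, which is why binary_to_term and list_to_existing_atom offer safer alternatives.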

“But Erlang maps don’t do string comparisons on keys!” you might say.

And indeed, you would be right. Because map keys may be any arbitrary value, each key is hashed on the way in, and every time keys are compared the comparing term is hashed the same way, so the final comparison is super fast; but we have to hash the input value first for it to mean anything. With atoms, though, we have a shortcut: we already know they are both unambiguous integer values throughout the system, and this is a slight win over having to hash before comparing keys.

In other situations where the comparison values cannot be hashed ahead of time, like function-head matching, however, atoms are a huge win over string comparisons:

-export([foo/1, bar/1]).

foo("Some string value that I don't really recall") ->
    {ok, 1};
foo("Some string value that I don't really care about") ->
    {ok, 2};
foo("Where is my cheeseburger?") ->
    {ok, 3};
foo(_) ->
    {error, wonky_input}.

bar(dont_recall) ->
    {ok, 1};
bar(dont_care) ->
    {ok, 2};
bar(cheeseburger) ->
    {ok, 3};
bar(_) ->
    {error, wonky_input}.

I’ve slowed the clockspeed of the system so that we can notice any difference here in microseconds.

1> timer:tc(fun() -> atoms:foo("Some string value that I don't really care about.") end).
2> timer:tc(fun() -> atoms:foo("Where is my cheeseburger?") end).
3> timer:tc(fun() -> atoms:foo("arglebargle") end).
4> timer:tc(fun() -> atoms:bar(dont_care) end).
5> timer:tc(fun() -> atoms:bar(cheeseburger) end).                                      
6> timer:tc(fun() -> atoms:bar(arglebargle) end).                                        

See what happened? The long string that varies only at the tail end from two of the options in the function heads takes 16 microseconds to compare and return a value. The string that differs at the head is rejected as a bad match the moment the very first character is compared. The total mismatch is our fastest return, because that string never needs to be traversed even a single time to know that it doesn’t match any of the available definitions of foo/1. With atoms, however, we see a pretty constant comparison speed. That speed would not change at all even if the atoms were a hundred characters long in text, because underneath they are all just integer values.

Now take a look back and consider the return values defined for foo/1 and bar/1. They don’t return naked values, they return pairs where the first member is an atom. This is a pretty common technique in Erlang when writing either a library intended for 3rd party use or when defining functions that have side-effecty operations that might fail (here we have pure functions, but I’m just using this as an example). Remember, the equal sign in Erlang is both an assignment operator and an assertion operator, when calling a function that nests its return values you have the freedom to decide whether to crash the current process on an unexpected value or to handle the “error” (in which case for your program it becomes an expected condition and not an exception).

blah(Condition) ->
    {ok, Value} = foo(Condition),
    do_stuff(Value).

The code above will crash if the tuple {error, wonky_input} is returned, because the expected atom 'ok' does not match the actually returned atom 'error'.

bleh(Condition) ->
    case foo(Condition) of
        {ok, Value}          -> do_stuff(Value);
        {error, wonky_input} -> get_new_condition()
    end.

The code above does not crash on that error return value; instead it moves on to get another condition to try, because the error tuple matches one of the case clauses. All this can happen really fast, because atom comparisons are really integer comparisons, and that means we save a ton of processor time (and space) by avoiding string/list or binary comparisons all over the place.

In addition to atoms being a much nicer and dramatically more flexible version of globally enumerated types, letting us write code in a more natural style that uses normal-language labels for program semantics, it turns out that function and module names are also atoms. This is a really nice feature in itself, because it allows us to write highly dynamic code with a lot less confusion about what types both sides of a call need to be, as well as making the code easier to read. I can even implement my own version of apply/3:

my_apply(Module, Function, Args) ->
    Module:Function(Args).

Of course, there is a whole pile of reasons why you will never want to actually write a function like this in a real program, but that’s the sort of power we have without doing any type casting magic, introspection, or on-the-fly modification of our program, references or memory space.

Once you get used to using atoms and matching you’ll really start to miss them in other languages and wonder how you ever got along without them. Now run off and start writing some code to practice thinking with atoms. They will become natural to you before the day is out.


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.