Using CircleCI for Continuous Integration of Elixir projects.

Continuous integration is vital to ensuring the quality of any software you ship. CircleCI is a great tool that provides effective continuous integration. In this post, we will guide you through setting up CircleCI for an Elixir project. We’ve set up the project on GitHub so you can follow along. In the project we will:

  • Build the project
  • Run tests and check code coverage
  • Generate API documentation
  • Check formatting
  • Run a Dialyzer check

You can follow the actual example here. Some inspiration for this blog post comes from this blog and this post. Prerequisites: For this project you will need an account on CircleCI and GitHub. You also need to connect the two accounts. Lastly, you will need Erlang/OTP and Elixir installed.

Create an Elixir project

To begin with, we need a project. For this demonstration we wanted to use the most trivial example possible, so we generated a classic ‘Hello World’ program.
To create the project, simply type the following into the shell:

mix new hello_world

We also added a printout, because the generated constant-returning function could otherwise be optimised away, which would confuse the code coverage checking.
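The tweaked function might look like this (a sketch of lib/hello_world.ex; the printout guarantees the body has an observable effect, so coverage sees a real execution):

```elixir
defmodule HelloWorld do
  @doc """
  Says hello. The IO.puts/1 call cannot be optimised away,
  so the line shows up correctly in coverage reports.
  """
  def hello do
    IO.puts("Hello, World!")
    :world
  end
end
```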

Add code coverage metric

Code coverage shows how much of the code is executed during testing. Once we know that, we can also identify the lines that are never executed: these are places where bugs can hide undetected, or signs that we have forgotten to write tests. If those lines of code are unreachable or unused, they should obviously be removed. To get this metric, we add the excoveralls package to our project (see here for details). Beware that even 100% code coverage does not mean the code is bug-free. Here’s an example (inspired by this):
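The mix.exs changes for excoveralls follow the package’s documented setup; the exact version constraint below is an assumption:

```elixir
# mix.exs (excerpt)
def project do
  [
    app: :hello_world,
    version: "0.1.0",
    deps: deps(),
    # Route coverage runs through ExCoveralls and make sure
    # the coveralls tasks run in the :test environment:
    test_coverage: [tool: ExCoveralls],
    preferred_cli_env: ["coveralls.html": :test]
  ]
end

defp deps do
  [{:excoveralls, "~> 0.11", only: :test}]
end
```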

defmodule Fact do
  def fact(1), do: 1
  def fact(n), do: n * fact(n - 1)
end
If the test executes Fact.fact(10), we get 100% test coverage, but the code is still faulty: it won’t terminate if the input is, for example, -1. For a new project, it should be easy to keep the code coverage near 100%, especially if the developers follow test-driven principles. However, for a “legacy” or already established project without adequate test coverage, reaching 100% code coverage might be unreasonably expensive. In this case, we should still strive to increase (or at least not decrease) the code coverage.
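For instance, a single (hypothetical) test such as the one below executes both clauses of Fact.fact/1, so coverage reports 100% while the non-termination bug stays invisible:

```elixir
defmodule FactTest do
  use ExUnit.Case

  test "factorial of 10" do
    # Runs the fact(n) clause nine times and the fact(1) clause once,
    # covering every line -- yet fact(-1) would recurse forever.
    assert Fact.fact(10) == 3_628_800
  end
end
```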

To run the tests with code coverage, execute:

mix coveralls.html

in the project root directory (hello_world). Apart from the console output, an HTML report is also generated in the cover directory.

Add Dialyzer checking

Dialyzer is a static analyzer tool for Erlang and Elixir code. It can detect type errors (e.g. when a function expects a list as an argument but is called with a tuple), unreachable code and other kinds of bugs. Although Elixir is a dynamically typed language, it is possible to add type specifications (specs) for function arguments, return values, structs, etc. The Elixir compiler does not check these specs, but the generated API documentation uses them, and they are extremely useful for users of your code. As a post-compilation step, you can run Dialyzer to check the type specifications in addition to running unit tests; you can regard this type checking as an early warning that detects incorrect input before you start debugging a failing unit test or API client. To enable a Dialyzer check for this code, we’ll use the Dialyzer mix task (see this commit for more details). Dialyzer needs PLT (persistent lookup table) files to speed up its analysis. Generating these files for the Elixir and Erlang standard libraries takes time, even on fast hardware, so it is vital to reuse the PLT files between CI jobs.
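As an illustration (a made-up function, not part of the hello_world project), a spec looks like this:

```elixir
defmodule Stats do
  @doc "Arithmetic mean of a non-empty list of numbers."
  @spec mean([number(), ...]) :: float()
  def mean(numbers) do
    Enum.sum(numbers) / length(numbers)
  end
end
```

The compiler ignores the @spec, but Dialyzer will warn if some caller passes, say, a tuple instead of a list.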

To run the dialyzer check, execute ‘mix dialyzer’ in the project root directory (hello_world). The output will be printed on the console. The first run (which generates the PLT file for the system and dependencies) might take a long time!

Add documentation generation

Elixir provides support for written documentation. By default this documentation will be included in the generated beam files and accessible from the interactive Elixir shell (iex). However, if the project provides an API for other projects, it is vital to generate more accessible documentation. We’re going to use the mix docs task to generate HTML documentation (see this commit for more details).
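The docs task is provided by the ex_doc package, so the dependency list gains one more entry. The version constraint and the environment list below are assumptions; :test is included so the CI job, which runs with MIX_ENV=test, can see the dependency:

```elixir
# mix.exs (excerpt)
defp deps do
  [{:ex_doc, "~> 0.20", only: [:dev, :test], runtime: false}]
end
```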

To generate documentation, execute:

mix docs

in the project root directory (hello_world). The documentation is generated (by default) in the doc directory.

Ensure code formatting guidelines

Using the same code style consistently throughout a project helps readers understand the code and could prevent some bugs from being introduced. The mix tool in Elixir provides a task to check that the code is in compliance with the specified guidelines. The command to do this is:

mix format --check-formatted

in the project root directory (hello_world).

Push the project to GitHub

Now that the project is created, it needs to be published to allow CircleCI to access it. Create a new repository on GitHub by clicking New on the Repository page, then follow the instructions. When the new repository is created, initialize the git repository and push it to GitHub:

cd hello_world
git init
git add config/ .formatter.exs .gitignore lib/ mix.exs test/
git commit -m "Initial version" -a
git remote add origin <repo-you-created>
git push -u origin master

Integrate the GitHub repository to CircleCI

To connect the repository:

  • Log in to CircleCI
  • Click on “Add Projects”
  • Click on “Set Up Project” for hello_world
  • For now, skip the instructions to create the .circleci/config.yml file (we’ll get back to this file later); just click on “Start Building”

The build will fail, because we didn’t add the configuration file, that’s the next step.

Configuring the CircleCI workflow

As our requirements above state, we’ll need five jobs. These are:

  • Build: Compile the project and create the PLT file for Dialyzer analysis.
  • Test: Run the tests and compute code coverage.
  • Generate documentation: Generate the HTML and associated files that document the project.
  • Check code format: The mix tool can be used to ensure that the project follows the specified code formatting guidelines.
  • Execute Dialyzer check: run the Dialyzer mix task using the previously generated PLT file

The syntax of the CircleCI configuration file is described here.

The next section describes the configuration required to setup the above five jobs, so create a .circleci/config.yml file and add the following to it:

Common preamble

version: 2.1

The above specifies the CircleCI configuration version. The jobs (described in the next sections) are then listed in this file.
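The overall shape of the file, before filling in the details, is roughly this (the job and workflow names are our own choices):

```yaml
version: 2.1

jobs:
  build:                    # compile + build the PLT (see below)
    # ...
  test:                     # run coveralls
    # ...
  generate_documentation:   # mix docs
    # ...
  dialyzer:                 # mix dialyzer
    # ...
  format_check:             # mix format --check-formatted
    # ...

workflows:
  version: 2
  ci:
    # dependencies between the jobs (see "Piecing it all together")
```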

The build step

The configuration for the build step:

    build:
      docker:
        - image: circleci/elixir:1.8.2
          environment:
            MIX_ENV: test
      steps:
        - checkout

        - run: mix local.hex --force
        - run: mix local.rebar --force

        - restore_cache:
            keys:
              - deps-cache-{{ checksum "mix.lock" }}
        - run: mix do deps.get, deps.compile
        - save_cache:
            key: deps-cache-{{ checksum "mix.lock" }}
            paths:
              - deps
              - ~/.mix
              - _build

        - run: mix compile

        - run: echo "$OTP_VERSION $ELIXIR_VERSION" > .version_file
        - restore_cache:
            keys:
              - plt-cache-{{ checksum ".version_file" }}-{{ checksum "mix.lock" }}
        - run: mix dialyzer --plt
        - save_cache:
            key: plt-cache-{{ checksum ".version_file" }}-{{ checksum "mix.lock" }}
            paths:
              - _build
              - deps
              - ~/.mix
In this example, we’re using Elixir 1.8.2. The list of available images can be found here. The MIX_ENV variable is set to test so that the test code is also built and, more importantly, the PLT file for Dialyzer covers test-only dependencies too.

Further build steps:

The actual build process checks out the code, then fetches and installs hex and rebar locally.

Next, we restore the dependencies from the cache. The cache key depends on the checksum of the mix.lock file, which contains the exact versions of the dependencies, so the key only changes when the dependencies change. The dependencies and built files are saved to the cache to be reused in the test and documentation generation steps, and in later CI runs.

Then we build the actual project. Unfortunately, this result cannot be reused, because the checkout step produces source files with the current timestamp; in later steps, the source files will therefore have newer timestamps than the beam files generated here, which leads to mix recompiling the project anyway.

The dialyzer PLT file depends on the Erlang/OTP and Elixir versions. Even though we’ve pinned the Elixir version, the Erlang/OTP in the Docker image may be updated, and in that case the PLT file would be out of date. As CircleCI caches are immutable, there’s no way to update a cached PLT file; new cache contents need a new cache name. Unfortunately, not all environment variable names can be used in CircleCI cache names, so a workaround is needed: create a temporary .version_file containing the Erlang/OTP and Elixir versions and use its checksum in the cache name, along with the checksum of the mix.lock file (which contains the exact versions of all dependencies). As long as we have exactly the same tool and dependency versions, we can reuse the PLT file safely; as soon as anything changes, we build a new one.

The test step

The configuration for the test step:

    test:
      docker:
        - image: circleci/elixir:1.8.2
      steps:
        - checkout
        - restore_cache:
            keys:
              - deps-cache-{{ checksum "mix.lock" }}
        - run: mix coveralls.html

        - store_artifacts:
            path: cover
            destination: coverage_results

Obviously, you need to use the same Docker image as in the build step. There’s no need to set the MIX_ENV variable explicitly, because the coveralls task runs in the test environment anyway. The job itself is fairly straightforward: check out the code from the repository, fetch the dependencies from the cache, then run the coverage check. The coverage report is generated in the cover directory and stored as an artifact, so you can inspect the results in a browser on the CircleCI page. If the tests themselves fail, the output of the step shows the error message and the job fails.

The documentation generation step

The configuration for the documentation generation step:

    generate_documentation:
      docker:
        - image: circleci/elixir:1.8.2
          environment:
            MIX_ENV: test
      steps:
        - checkout
        - restore_cache:
            keys:
              - deps-cache-{{ checksum "mix.lock" }}
        - run: mix docs

        - store_artifacts:
            path: doc
            destination: documentation

Once again, the same Docker image from the build step needs to be used, and setting the MIX_ENV variable matters: otherwise the dependencies restored from the cache (built for the test environment) might not match. The job is fairly straightforward: check out the code from the repository, fetch the dependencies from the cache (which includes the documentation generation tooling), then run the docs task. The documentation is generated in the doc directory and stored as an artifact, so you can browse the results on the CircleCI page.

The dialyzer step

The configuration for the dialyzer step:

    dialyzer:
      docker:
        - image: circleci/elixir:1.8.2
          environment:
            MIX_ENV: test
      steps:
        - checkout
        - run: echo "$OTP_VERSION $ELIXIR_VERSION" > .version_file
        - restore_cache:
            keys:
              - plt-cache-{{ checksum ".version_file" }}-{{ checksum "mix.lock" }}
        - run: mix dialyzer --halt-exit-status

Much like the last step, the docker image needs to match the one used for the build. Ensure that the MIX_ENV variable is correct. The workaround mentioned above is required to find the cache with the right PLT file, then executing Dialyzer is a simple command. If Dialyzer finds an error, it will return with a non-zero exit code, so the step will fail.

The format checking step

The configuration for the format checking step:

    format_check:
      docker:
        - image: circleci/elixir:1.8.2
          environment:
            MIX_ENV: test
      steps:
        - checkout

        - run: mix format --check-formatted

This is really simple, we don’t even need any cached files, just run the check.

Piecing it all together

CircleCI executes a workflow which contains the above steps. This is the configuration for the workflow:

    workflows:
      version: 2
      ci:
        jobs:
          - build
          - format_check:
              requires:
                - build
          - generate_documentation:
              requires:
                - build
          - dialyzer:
              requires:
                - build
          - test:
              requires:
                - build

The workflow specifies that the four later steps depend on the build step, but they can run simultaneously with each other. The whole configuration file can be seen on GitHub. When the configuration file is ready, add it to git:

git add .circleci/config.yml

And push it to GitHub:

git push origin master

CircleCI will automatically detect that a new version was committed and will execute the configured jobs.


Continuous integration is essential in a fast-moving project. Developers need feedback as soon as possible, because the earlier a bug is found, the cheaper it is to fix. This post presents a simple but effective setup for this vital part of an Elixir project. Want more fantastic Elixir inspiration? Don’t miss out on Code Elixir, a day to learn, share, connect with and be inspired by the Elixir community. Want to learn about live tracing in Elixir? Head to our easy-to-follow guide.


Elixir v1.9 released

Elixir v1.9 is out with releases support, improved configuration, and more.

We are also glad to announce Fernando Tapia Rico has joined the Elixir Core Team. Fernando has been extremely helpful in keeping the issues tracker tidy, by fixing bugs and improving Elixir in many different areas, such as the code formatter, IEx, the compiler, and others.

Now let’s take a look at what’s new in this new version.


The main feature in Elixir v1.9 is the addition of releases. A release is a self-contained directory that consists of your application code, all of its dependencies, plus the whole Erlang Virtual Machine (VM) and runtime. Once a release is assembled, it can be packaged and deployed to a target as long as the target runs on the same operating system (OS) distribution and version as the machine running the mix release command.

Releases have always been part of the Elixir community thanks to Paul Schoenfelder’s work on Distillery (and EXRM before that). Distillery was announced in July 2016. Then in 2017, DockYard hired Paul to work on improving deployments, an effort that would lead to Distillery 2.0. Distillery 2.0 provided important answers in areas where the community was struggling to establish conventions and best practices, such as configuration.

At the beginning of this year, thanks to Plataformatec, I was able to prioritize the work on bringing releases directly into Elixir. Paul was aware that we wanted to have releases in Elixir itself, and during ElixirConf 2018 I announced that releases would be the last planned feature for Elixir.

The goal of Elixir releases was to double down on the most important concepts provided by Distillery and provide extension points for the other bits the community may find important. Paul and Tristan (who maintains Erlang’s relx) provided excellent feedback on Elixir’s implementation, which we are very thankful for. The Hex package manager is already using releases in production and we also got feedback from other companies doing the same.

Enough background, let’s see why you would want to use releases and how to assemble one.

Why releases?

Releases allow developers to precompile and package all of their code and the runtime into a single unit. The benefits of releases are:

  • Code preloading. The VM has two mechanisms for loading code: interactive and embedded. By default, it runs in interactive mode, which dynamically loads a module the first time it is used; for example, the first time your application calls a function in Enum, the VM will find the Enum module and load it. There’s a downside: when you start a new server in production, it may need to load many modules, causing the first requests to show an unusual spike in response time. Releases run in embedded mode, which loads all available modules upfront, guaranteeing your system is ready to handle requests right after booting.

  • Configuration and customization. Releases give developers fine grained control over system configuration and the VM flags used to start the system.

  • Self-contained. A release does not require the source code to be included in your production artifacts. All of the code is precompiled and packaged. Releases do not even require Erlang or Elixir in your servers, as they include the Erlang VM and its runtime by default. Furthermore, both Erlang and Elixir standard libraries are stripped to bring only the parts you are actually using.

  • Multiple releases. You can assemble different releases with different configuration per application or even with different applications altogether.

  • Management scripts. Releases come with scripts to start, restart, connect to the running system remotely, execute RPC calls, run as daemon, run as a Windows service, and more.

1, 2, 3: release assembled!

You can start a new project and assemble a release for it in three easy steps:

$ mix new my_app
$ cd my_app
$ MIX_ENV=prod mix release

A release will be assembled in _build/prod/rel/my_app. Inside the release, there will be a bin/my_app file which is the entry point to your system. It supports multiple commands, such as:

  • bin/my_app start, bin/my_app start_iex, bin/my_app restart, and bin/my_app stop - for general management of the release

  • bin/my_app rpc COMMAND and bin/my_app remote - for running commands on the running system or to connect to the running system

  • bin/my_app eval COMMAND - to start a fresh system that runs a single command and then shuts down

  • bin/my_app daemon and bin/my_app daemon_iex - to start the system as a daemon on Unix-like systems

  • bin/my_app install - to install the system as a service on Windows machines

Hooks and Configuration

Releases also provide built-in hooks for configuring almost every need of the production system:

  • config/config.exs (and config/prod.exs) - provides build-time application configuration, which is executed when the release is assembled

  • config/releases.exs - provides runtime application configuration. It is executed every time the release boots and is further extensible via config providers

  • rel/vm.args.eex - a template file that is copied into every release and provides static configuration of the Erlang Virtual Machine and other runtime flags

  • rel/env.sh.eex and rel/env.bat.eex - template files that are copied into every release and executed on every command to set up environment variables, including ones specific to the VM, and the general environment
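A minimal config/releases.exs might look like this (the application and environment variable names are made up):

```elixir
# config/releases.exs -- evaluated every time the release boots
import Config

config :my_app,
  port: String.to_integer(System.fetch_env!("MY_APP_PORT"))
```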

We have written extensive documentation on releases, so we recommend checking it out for more information.


We also used the work on releases to streamline Elixir’s configuration API. A new Config module has been added to Elixir. The previous configuration API, Mix.Config, was part of the Mix build tool. However, since releases provide runtime configuration and Mix is not included in releases, we ported the Mix.Config API to Elixir. In other words, use Mix.Config has been soft-deprecated in favor of import Config.

Another important change related to configuration is that mix new will no longer generate a config/config.exs file. Relying on configuration is undesired for most libraries and the generated config files pushed library authors in the wrong direction. Furthermore, mix new --umbrella will no longer generate a configuration for each child app, instead all configuration should be declared in the umbrella root. That’s how it has always behaved, we are now making it explicit.

Other improvements

There are many other enhancements in Elixir v1.9. The Elixir CLI got a handful of new options in order to best support releases. Logger now computes its sync/async/discard thresholds in a decentralized fashion, reducing contention. EEx (Embedded Elixir) templates support more complex expressions than before. Finally, there is a new ~U sigil for working with UTC DateTimes as well as new functions in the File, Registry, and System modules.
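For example, the new sigil builds a UTC DateTime directly:

```elixir
datetime = ~U[2019-06-24 12:30:00Z]
datetime.time_zone
#=> "Etc/UTC"
```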

What’s next?

As mentioned earlier, releases was the last planned feature for Elixir. We don’t have any major user-facing feature in the works nor planned. I know for certain that some will consider this fact the most exciting part of this announcement!

Of course, it does not mean that v1.9 is the last Elixir version. We will continue shipping new releases every 6 months with enhancements, bug fixes and improvements. You can see the Issues Tracker for more details.

We are also working on some structural changes. One of them is moving the mix xref pass straight into the compiler, which would allow us to emit undefined function and deprecation warnings in more places. We are also considering a move to Cirrus-CI, so we can test Elixir on Windows, Unix, and FreeBSD through a single service.

It is also important to highlight that there are two main reasons why we can afford to have an empty backlog.

First of all, Elixir is built on top of Erlang/OTP and we simply leverage all of the work done by Ericsson and the OTP team on the runtime and Virtual Machine. The Elixir team has always aimed to contribute back as much as possible and those contributions have increased in the last years.

Second, Elixir was designed to be an extensible language. The same tools and abstractions we used to create and enhance the language are also available to libraries and frameworks. This means the community can continue to improve the ecosystem without a need to change the language itself, which would effectively become a bottleneck for progress.

Check the Install section to get Elixir installed and read our Getting Started guide to learn more. We have also updated our advanced Mix & OTP to talk about releases. If you are looking for a more fast paced introduction to the language, see the How I Start: Elixir tutorial, which has also been brought to the latest and greatest.

Have fun!


Help Dialyzer Help You!

…or Why you should use specs if you use opaque types

Following the steps from Devon and Stavros, I wanted to write this article to highlight a not so obvious dialyzer lesson about opaque types and specs…

Help me help you — Jerry Maguire


For the impatient ones…

If you define an opaque type, you have to add specs to all the exported functions that use it (i.e. your module’s API).

Opaque Types

Since this article is about opaque types, I will do a quick intro first…

In Elixir, there are 3 ways to specify a user-defined type:

@type t1 :: boolean | atom # this type is exported
@typep t2 :: String.t # this type is private
@opaque t3 :: t1 | t2 # this type is opaque

The equivalent in Erlang is slightly more verbose…

-type t1() :: boolean() | atom().
-export_type([t1/0]). % This makes t1 exported
-type t2() :: string(). % This type is private (not exported)
-opaque t3() :: t1() | t2().
-export_type([t3/0]).

Private types can only be used within the module that defines them. Exported types can be used anywhere and if you use them outside the module that defines them you have to use their fully qualified names (e.g. String.t, MyMod.my_type, etc.).

Opaque types are just like exported types in the sense that you can use them from outside of the module where you define them. But there is a subtle difference: You are not supposed to use the definition of an opaque type outside its module.

Check, for instance, the docs for HashSet.t(): there is only the name of the type there and that’s intentional. The docs won’t tell you how that type is implemented and that’s because you should treat those things as black-boxes. You’re not supposed to deconstruct or pattern-match a HashSet.t(), you’re supposed to use the functions in the HashSet module to work with it.

For comparison, check the types in the String module. There, all exported types expose their internal structure and that’s intentional again. The idea here is that you are more than allowed to pattern-match on them.

The internal representation of HashSet.t may eventually change and, since you never depended on it, your code will still work. String.t, on the other hand, is not expected to ever change, and you can benefit from the fact that it’s implemented as a binary() when writing your code.

Dialyzer and Opaque Types

Now, opaque types (and types in general) are barely checked by the compiler (only with the right options will it warn you if you are using a nonexistent private type, and that’s all). To validate that you’re actually respecting the rules stated above (i.e. not deconstructing instances of opaque types outside of their modules) you need to use Dialyzer.

Dialyzer (through, for instance, dialyxir in Elixir) will check your code and warn you if you ever break the opaqueness of a term. But there is a catch: Dialyzer can’t work alone. You have to help it do its job, as you will see below…

The Setup

My discovery of how to help dialyzer here began with two very large modules, which I have reduced considerably for you. They’re boring now, but they were very large and full of functions originally…
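In essence, they boiled down to something like the following (a reconstruction: the module and function names match the warnings below, the rest is guesswork):

```elixir
defmodule MyODT do
  @opaque t :: %{f1: String.t(), f2: String.t()}

  @spec new(String.t(), String.t()) :: t
  def new(f1, f2), do: %{f1: f1, f2: f2}

  @spec f1(t) :: String.t()
  def f1(%{f1: f1}), do: f1

  # NOTE: no @spec here -- this turns out to be the culprit
  def f2(%{f2: f2}), do: f2
end

defmodule MyODTUser do
  @spec print_odt(MyODT.t()) :: :ok
  def print_odt(odt) do
    IO.puts(MyODT.f1(odt))
    IO.puts(MyODT.f2(odt))
  end
end
```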

Well, I ran dialyzer on my project and, sure enough, I got these very, very clear warnings…

lib/dialyzer_example.ex:19: Function print_odt/1 has no local return
lib/dialyzer_example.ex:19: The call 'Elixir.MyODT':f1(Vodt@1::#{'f2':=_, _=>_}) does not have an opaque term of type 'Elixir.MyODT':t() as 1st argument

What’s going on here?

OK. As it usually happens with dialyzer… I had many questions, but I knew…

Dialyzer is NEVER wrong

So… Let’s see if we can figure this out because, as Sean so brilliantly expressed in his latest talk at CodeBeamSF, it must be just a little misunderstanding between me and dialyzer.

What dialyzer says

So, let’s do the obvious thing first… dialyzer says that my call to MyODT.f1/1 doesn’t have a proper MyODT.t argument. What I am using as an argument to that function is odt, a variable that, according to the spec I wrote for MyODTUser.print_odt, is actually an instance of MyODT.t 🤔

Dialyzer also says that MyODTUser.print_odt will never return, but that’s likely because it’s considering the other discrepancy. If I fix that one, I’ll remove both of them at once.

What dialyzer MEANS

If you check Stavros’ talk (video below), you’ll learn that dialyzer works by inferring the broadest possible type for each variable and emitting warnings when it can’t infer any possible type for one.

With that in mind, and since it’s complaining about odt, let’s try to figure out what dialyzer has inferred as its success type.

Actually, we don’t have to go too far for that. It’s in the warning itself: Vodt@1::#{'f2':=_,_=>_}. As you might have noticed, Vodt@1 is just the Erlang representation of the variable odt and #{'f2':=_,_=>_} is its type.

That map is somewhat similar to our opaque type MyODT.t, but not quite… since it allows maps to have any keys and values, as long as they have a field called f2 and MyODT.t only allows f1 and f2 as keys (and both of them are required).

How could dialyzer have found such a type for odt, then? Well… let’s see what information was available when it was inferring the types.

There is a typespec for print_odt/1, but dialyzer only uses typespecs to narrow down success types once they’re found, which is not the case here. So… that spec wasn’t on dialyzer’s mind at the time of the warning.

The only other info available was the fact that odt was used to call both MyODT.f1/1 and MyODT.f2/1. And that’s the key to solve our mystery! Because if you check the code for that module, you will notice that MyODT.f1/1 has a spec, but MyODT.f2/1 hasn’t.

Without a spec for MyODT.f2/1, dialyzer does its best and infers that the type of odt has to be the success typing of that function’s argument (i.e. #{'f2':=_,_=>_}). And that type is not opaque. Since there is no spec that says so, there is no way for dialyzer to tell that f2 actually requires an instance of MyODT.t and not just any map with an f2 key.

Then, when dialyzer tries to match that type against the success typing of the argument of MyODT.f1/1 (i.e. MyODT.t)… 💥 … There is no way to match a random map type against that opaque type. As a matter of fact, the only type that matches an opaque type is that same opaque type. That’s the whole point. Even if you manage to build something that looks like the definition of the opaque type, if dialyzer can’t prove that it is, in fact, the expected opaque type, it will emit a warning. In other words: we are violating the opaqueness of that argument.

Simply adding a spec to MyODT.f2/1 removes both warnings. And that leads us again to the lesson of the day:
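In other words, the only change needed is a spec on the function that lacked one (shown here on the reduced version of the module):

```elixir
# Before: no spec, so dialyzer inferred the plain map type #{'f2':=_,_=>_}
@spec f2(t) :: String.t()
def f2(%{f2: f2}), do: f2
```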

If you define an opaque type, you have to add specs to all the exported functions that use it (i.e. your module’s API).

What I would LIKE dialyzer to say

One day, someone will finish what Elli Fragkaki once started and dialyzer will tell us something along the lines of…

lib/dialyzer_example.ex:19: Function print_odt/1 has no local return
lib/dialyzer_example.ex:19: The call to 'Elixir.MyODT':f1/1 requires an opaque term of type 'Elixir.MyODT':t() as 1st argument and the variable that you're using for it (Vodt@1) must have type #{'f2':=_, _=>_} since it's also used in a call to 'Elixir.MyODT':f2/1

…or something even clearer and more helpful!

In Other News…


Like every year since 2017, SpawnFest is coming!

This year it will happen on September 21st & 22nd.

SpawnFest is an annual 48-hour FREE online development competition for all beamers! You can build teams of up to four people and you’ll have 48 hours (a weekend) to create the best BEAM-based applications you can.

You can win some amazing prizes provided by our sponsors. Did I mention it’s FREE and ONLINE (i.e. you can play in your pajamas)?

Registration is open and you can either build a team yourself or register as an independent developer and our mystical algorithm™️ will help you find a great team.

We’re also looking for sponsors. If your company provides a service or a product and wants to give some of it as a prize for the winners, just like DigitalOcean did… please point them our way :)


ElixirConf is coming to Latin America for the first time!

Thanks to our friends at PlayUS Media, we’ll meet in Medellín, Colombia for ElixirConfLA on October 24th & 25th.

We already have an amazing speaker lineup, including Verónica Lopez, Andrea Leopardi, Carlos Andres Bolaños, Francesco Cesarini, Mariano Guerra, Carolina Pascale Campos, Milton Mazzani and Simón Escobar, and the CFT is open until July 19th.

It will be a great event that you don’t want to miss.

You can still get Very Early Bird Tickets.

Erlang Battleground

As usual, a reminder: This publication is still looking for writers. If you want to join us, just get in touch with me (Brujo Benavides) and I’ll add you.

Help Dialyzer Help You! was originally published in Erlang Battleground on Medium, where people are continuing the conversation by highlighting and responding to this story.


The problem of connecting to Smart Meters was solved 30 years ago! Powered by scalable technology.

Electricity Utilities around the globe are starting to undergo a radical transformation to address a range of economic, environmental and technical challenges. Global drivers for change are rising emissions, domestic energy resource constraints, growing demand for electricity, financing constraints for new generation assets development, cost of electricity, and ageing infrastructure. In addition, the increasing prominence of electric vehicles (EVs) and battery storage at end-users is underlining the need for a sophisticated grid.

Key to understanding the smart grid is smart metering. According to GlobalData, global smart meter installations were 88.2 million in 2017, and these are expected to grow to over 588 million units installed in 2022. China has led the market with 406.9 million smart meter installations to the end of 2017, the US and Japan followed with 38.7 million and 36.5 million smart meters installation respectively.

The total number of data messages may be low today, but as the smart grid continues to roll out, systems will need to scale to billions of messages and be fault tolerant. For instance, each meter will need to be provisioned onto the system, it needs to link with billing, control platforms, data analysis and more. We’re only just beginning to imagine how this data can be used to enable new innovations in the industry. The future of smart meters could empower and transform sustainability, healthcare and peer-to-peer power sharing.

Luckily, the problem of how to connect and control billions of devices was solved 30 years ago, in the 1980s, by the team at Ericsson who developed Erlang. It’s a language built specifically for fault tolerance, high availability and massive concurrency in critical systems. While it was built for the telecommunications industry, it’s now used by industry leaders in betting, health, advertising and online gaming: anywhere that high availability and fault tolerance are pivotal.

Erlang’s concurrency, its no-shared-memory architecture and its built-in ‘fail and recover’ approach make it behave extremely gracefully and predictably under highly variable stochastic load. Erlang excels in handling message explosion and multiplexing – the generation of a cascade of messages out to individual users starting from a single event – a message cascade that can span hundreds of servers in a coherent way that maintains message delivery order. Erlang is the foundation for the EMQ X MQTT distributed message broker for all major IoT protocols as well as M2M, NB-IoT and other mobile applications.
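To make the no-shared-memory model concrete, here is a minimal sketch of an Erlang process accumulating meter readings. All names (`meter_sketch`, the `reading`/`total` messages) are invented for illustration and are not part of any real metering system: state lives only inside the process and is changed only via messages.

```erlang
%% Minimal sketch of Erlang's no-shared-memory message passing:
%% each meter reading is an asynchronous message to a process that
%% keeps its own private state. Names are illustrative only.
-module(meter_sketch).
-export([start/0, loop/1, reading/2, total/1]).

%% Spawn a process holding its own (unshared) running total.
start() ->
    spawn(?MODULE, loop, [0]).

loop(Total) ->
    receive
        {reading, Kwh} ->
            loop(Total + Kwh);         % state is updated only via messages
        {total, From} ->
            From ! {total, Total},     % reply to the asking process
            loop(Total)
    end.

%% Asynchronously record a reading.
reading(Pid, Kwh) ->
    Pid ! {reading, Kwh}.

%% Synchronously ask for the running total.
total(Pid) ->
    Pid ! {total, self()},
    receive {total, T} -> T end.
```

Because message ordering between any pair of processes is preserved, the readings sent before the `total` request are guaranteed to be counted.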

To find out more about our partnership with EMQ and how it can help you, head here.


Altenwald from 2016 to Erlang Solutions in 2019

I didn’t know how to begin writing this article. It is really hard to admit that a project didn’t work out and that you’ve had to go back a step. It doesn’t feel like the starting square, but almost. I’m beginning a new stage at Erlang Solutions; do you want to know the story?


Blockchain No-Brainer: Ownership in the Digital Era


Seven months of intense activity have passed since the release of the Blockchain 2018: Myth vs. Reality article. As a follow-up to that blog post, I would like to take the opportunity to analyse in further detail the impact that this new technology has on our perceptions of asset ownership and value, and on how we are continuously exploring new forms of transactional automation, and then to conclude with the challenge of delivering safe and fair governance.

Since the topic to cover is vast, I have decided to divide it into two separate blog posts, the first of which will cover how the meaning and perception of ownership is changing, while the second will discuss how Smart Contract automation can help deliver safe, fair, fast, low-cost, transparent and auditable transactional interoperability.

My intention is to provide an abstract and accessible summary that describes the state of the art of blockchain technology and what motivations have led us to the current development stage. While these posts will not focus on future innovation, they will serve as a prelude to bolder publications I intend to release in the future.

Digital Asset Ownership, Provenance and Handling

How we value Digital vs. Physical Assets

In order to understand how the notion of ownership is currently perceived in society, I propose to briefly analyse the journey that has brought us to the present stage and the factors which have contributed to the evolution of our perceptions.

Historically people have been predominantly inclined to own and trade physical objects. This is probably best explained by the fact that physical objects stimulate our senses and don’t require the capacity to abstract, as opposed to services for instance. Ownership was usually synonymous with possession.

Let us try to break down and extract the fundamentals of the economy of physical goods: we originally came to this world and nothing was owned by anyone; possession by individuals then gave rise to ownership ‘rights’ (obtained through the expenditure of labour - finding or creating possessions); later we formed organisations that exercised territorial control and supported the notion of ownership (via norms and mores that evolved into legal frameworks), as a form of protection of physical goods. Land and raw materials are the building blocks of this aspect of our economy.

When we trade (buy or sell) commodities or other physical goods, what we own is a combination of the raw material, which comes with a limited supply, plus the human/machine work required to transform it to make it ready to be used and/or consumed. Value was historically based on a combination of the inherent worth of the resource (scarcity being a proxy) plus the cost of the work required to transform that resource into an asset. Special asset classes (e.g. art) soon emerged where value was related to intangible factors such as provenance, fashion, skill (as opposed to the quantum of labour) etc.

We can observe that even physical goods contain an abstract element: the design, the capacity to model it, package it and make it appealing to the owners or consumers.

In comparison, digital assets have a stronger element of abstraction which defines their value, while their physical element is often negligible and replaceable (e.g. software can be stored on disk, transferred or printed). These types of assets typically stimulate our intellect and imagination, as our senses get activated via a form of rendering which can be visual, acoustic or tactile. Documents, paintings, photos, sculptures and music notations have historical equivalents that predate any form of electrically-based analog or digital representations.

The peculiarity of digital goods is that they can be copied exactly at very low cost: for example, they can be easily reproduced in multiple representations on heterogeneous physical platforms or substrates thanks to the discrete nature in which we store them (using a simplified binary format). The perceivable form can be reconstructed and derived from these equal representations an infinite number of times. This is a feature that dramatically influences how we value digital assets. The opportunity to create replicas implies that it is not the copy nor the rendering that should be valued, but rather the original digital work. In fact, this is one of the primary achievements that blockchain has introduced via the hash lock inherent to its data structure.
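To sketch how such a hash lock binds each block to its predecessor, here is a toy Erlang illustration. The `hash_chain` module, its data shape and the SHA-256 choice are invented for this example and do not reflect any real blockchain’s block format; the point is only that altering an earlier block changes every later hash.

```erlang
%% Toy hash chain: each entry's hash covers the previous hash plus
%% its own data, so tampering with any earlier block changes every
%% later hash. Illustration only, not a real blockchain format.
-module(hash_chain).
-export([chain/1]).

%% Build a chain (newest entry first) from a list of binary blocks,
%% starting from an all-zero "genesis" hash.
chain(Blocks) ->
    lists:foldl(
      fun(Data, [{PrevHash, _} | _] = Acc) ->
              Hash = crypto:hash(sha256, <<PrevHash/binary, Data/binary>>),
              [{Hash, Data} | Acc]
      end,
      [{<<0:256>>, genesis}],
      Blocks).
```

Two chains that differ only in their first block end up with different hashes for every subsequent block, which is what makes the original work distinguishable from a tampered copy.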

If used correctly, the capacity to clone a digital item can increase confidence that it will exist indefinitely and therefore maintain its value. However, as mentioned in my previous blog post (Blockchain 2018 - Myth vs. Reality), the immutability and perpetual existence of digital goods are not immune to destruction, as at present there is a dependence on a physical medium (e.g. hard disk storage) that is potentially subject to alteration, degradation or obsolescence.

A blockchain, such as that of the Bitcoin network, represents a model for vast replication and reinforcement of digital information via so-called Distributed Ledger Technology (DLT). Repair mechanisms can intervene in order to restore integrity in the event that data gets corrupted by a degrading physical support (e.g. a hard disk failure) or a malicious actor. The validity of data is agreed upon by a majority (the level of majority varying across different DLT implementations) of peer-to-peer actors (ledgers) through a process known as consensus.

This is a step in the right direction, although the exploration of increasingly advanced platforms to preserve digital assets is expected to evolve further. As genetic evolution suggests, clones with equal characteristics can all face extinction by the introduction of an actor that makes the environment unfit for survival in a particular form. Thus, it might be sensible to introduce heterogeneous types of ledgers to ensure their continued preservation on a variety of physical platforms and therefore enhance the likelihood of survival of information.

The evolution of services and their automation

In the previous paragraph, we briefly introduced a distinction between physical assets and goods where the abstraction element is dominant. Here I propose to analyse how we have started to attach value to services and how we are becoming increasingly demanding about their performance and quality.

Services are a form of abstract valuable commonly traded on the market. They represent the actions bound to the contractual terms under which a transformation takes place. This transformation can apply to physical goods, digital assets, other services themselves or to individuals. What we trade, in this case, is the potential to exercise a transformation, which in some circumstances might have been applied already. For instance, a transformed commodity, such as refined oil, has already undergone a transformation from its original raw form.

Another example is an artefact where a particular shape can either be of use or trigger emotional responses, such as artefacts with artistic value. Service transformation in the art world can be highly individualistic, depending on the identity of the person doing the transforming (the artist, the critic, the gallery, etc.) or on the audience for the transformed work. Thus, Duchamp’s elevation (or, possibly, degradation) of a porcelain urinal to artwork relied on a number of connected elements (i.e. transformational actions by actors in the art world and beyond) for the transformation to be successful; these elements are often only recognised and/or understood after the transformation has been effected.

Even rendering an abstract form, such as music notation or a record, into actual sound is a type of transformation that we consider valuable and commonly trade. These transformations can be performed by humans or machinery. With the surge of interest in digital goods, there is a corresponding increase in interest in acquiring services to transform them.

As these transformations are being automated more and more, and the human element is progressively being removed, even services are gradually taking the shape of automated algorithms that are yet another form of digital asset, as is the case with Smart Contracts. Note, however, that in order to apply the transformation, an algorithm is not enough, we need an executor such as a physical or virtual machine.

In Part 2 we will analyse how the automation of services has led to the evolution of Smart Contracts, as a way to deliver efficient, transparent and traceable transformations.

Sustainability and Access to resources

Intellectual and imagination stimulation is not the only motivator that explains the increasing interest in digital goods and consequently their rising market value. Physical goods are known to be quite costly to handle. In order to create, trade, own and preserve them there is a significant expenditure required for storage, transport, insurance, maintenance, extraction of raw materials etc.

There is a competitive and environmental cost involved, which makes access to physical resources inherently non-scalable and occasionally prohibitive, especially in concentrated urban areas. As a result, people are incentivised to own and trade digital goods and services, which turns out to be a more sustainable way forward.

For example, let us think about an artist who lives in a densely populated city and needs to acquire a canvas, paint, brushes, and so on, plus studio and storage space, in order to create a painting. Finding that these resources are difficult or impossible to access, they decide to produce their artwork in digital form.

Services traditionally require resources to be delivered (e.g. raw material processing). However, a subset of these (such as those requiring non-physical effort, for instance, stock market trading, legal or accounting services) are ideally suited to being carried out at a significantly lower cost via the application of algorithmic automations.

Note: this analysis assumes that the high carbon footprint required to drive the ‘Proof of Work’ consensus mechanism used in many DLT ecosystems can be avoided, otherwise the sustainability advantage can be legitimately debated.

The Generative Approach

The affordable access to digital resources, combined with the creation of increasingly innovative algorithms, has also contributed to the rise of generative production of digital assets. This includes partial generation, typically obtained by combining and assembling pre-made parts: e.g. Robohash derives a hash from text appended to its URL, which maps to a fixed combination of mouths, eyes, faces, bodies and accessories.

Other approaches involve neural-net deep learning: e.g. ThisPersonDoesNotExist uses a technology known as Generative Adversarial Network (GAN), released by NVidia Research Labs, to generate random human faces; Magenta uses Google’s TensorFlow library to generate music and art; while DeepArt uses a patented neural-net implementation based on the 19-layer VGG network.

In the gaming industry we should mention No Man’s Sky, a mainstream console and PC game that shows a successful use of procedural generation.

Project DreamCatcher also uses a generative design approach that leverages a wide set of simulated solutions that respond to a set of predefined requirements that a material or shape should satisfy.

When it comes to generative art, it is important to ensure scarcity by restricting the creation of digital assets to limited editions, so that an auto-generated item can be traded without the danger that an excess of supply triggers deflationary repercussions on its price. In Blockchain 2019 Part 2 we will describe techniques to register Non-Fungible Tokens (NFTs) on the blockchain in order to track each individual replica of an object while ensuring that there are no illegal ones.

Interesting approaches directly linked to blockchain technology have been launched recently, such as the AutoGlyphs from LarvaLabs, although this remains an open area for further exploration. Remarkably successful is the case of Obvious Art, where another application of the GAN approach resulted in a generated artwork being auctioned off for $432,500.

What prevents mass adoption of digital goods

Whereas it is sensible to forecast a significant expansion of the digital assets market in the coming years, it is also true that, at present, there are still several psychological barriers to overcome in order to get broader traction in the market.

The primary challenge relates to trust. A purchaser wants some guarantees that traded assets are genuine and that the seller owns them or acts on behalf of the owner. DLT provides a solid way to work out the history of a registered item without interrogating a centralised trusted entity. Provenance and ownership are inferable and verifiable from a number of replicated ledgers while block sequences can help ensure there is no double spending or double sale taking place within a certain time frame.

The second challenge is linked to the meaning of ownership outside of the context of a specific market. I would like to cite as an example the closure of Microsoft’s eBook store. Microsoft’s decision to pull out of the ebook market, presumably motivated by a lack of profit, could have an impact on all ebook purchases that were made on that platform. The perception of the customer was obviously that owning an ebook was the same as owning a physical book. What Microsoft might have contractually agreed through its End-User License Agreement (EULA), however, is that this is true only within the contextual existence of its platform.

This has also happened in video games, where enthusiast players perceive the acquisition of a sword or armour as if they were real objects. Even without the game closing down its online presence (e.g. when its maintenance costs become unsustainable), a lack of interest or reduced popularity might result in a digital item losing its value.

There is a push, in this sense, towards forms of ownership that can break out from the restrictions of a specific market and be maintained in a broader context. Blockchain’s DLT in conjunction with Smart Contracts, which potentially exist indefinitely, can be used to serve this purpose, allowing people to effectively retain their digital items’ use across multiple applications. Whether those items will have a utility or value outside the context and platform in/on which they were originally created remains to be seen.

Even the acquisition of digital art requires a substantial paradigm shift. Compared to what happens with physical artefacts, there is not an equivalent tangible sense of taking home (or to one’s secure storage vault) a purchased object. This has been substituted by a verifiable trace on a distributed ledger that indicates to whom a registered digital object belongs.

Sensorial forms can also help in adapting to this new form of ownership. For instance, a digital work of art could be printed, or a 3D model could be rendered for a VR or AR experience or 3D printed. In fact, controlling what you can do with a digital item is per se a form of partial ownership, which can be traded. This is different from the concept of fractional ownership, where your ownership comes in a general but diluted form; it is more a functional type of ownership. This is a concept which exists in relation to certain traditional, non-digital assets, often bound by national laws and the physical form of those assets. For instance, I can own a classic Ferrari and allow someone else to race it; I can display it in my museum and charge an entry fee to visitors; but I will be restricted in how I am permitted to use the Ferrari name and badge attached to that vehicle.

The transition to these new notions of ownership is particularly demanding when it comes to digital non-fungible assets. Meanwhile, embracing fungible assets, such as a cryptocurrency, has been somewhat easier for customers who are already used to relating to financial instruments. This is probably because fungible assets serve the unique function of paying for something, while in the case of non-fungible assets there is a range of functions that define their meaning in the digital or physical space.


In this post we have discussed a major emerging innovation that blockchain technology has influenced dramatically over the last two years - the ownership of digital assets. In Blockchain 2019 - Part 2 we will expand on how the handling of assets gets automated via increasingly powerful Smart Contracts.

What we are witnessing is a new era that is likely to revolutionise the perception of ownership and the reliance on trusted and trustless forms of automation. This is driven by the need to increase interoperability, cost compression, sustainability, performance (as in the speed at which events occur) and customisation, all aspects where traditional centralised fintech systems have not provided a sufficient solution. It is worthwhile, however, to remind ourselves that the journey towards meeting these requirements should not come at the expense of safety and security.

Privacy and sharing are also heavily debated areas. Owners of digital assets often prefer their identity to remain anonymous, while the benefit of socially shared information is widely recognised. An art collector, for instance, might not want to disclose his or her personal identity. Certainly, a lot more still remains to be explored, as we are clearly just at the beginning of a wider journey that is going to reshape global digital and physical markets.

At Erlang Solutions we are collaborating with partners in researching innovative and performant services to support a wide range of clients. This ranges from building core blockchain technologies to more specific distributed applications supported by Smart Contracts. Part of this effort has been shared on our website where you can find some information on who we work with in the fintech world and some interesting case studies, others of which remain under the scope of NDAs.

This post intentionally aims at providing a state-of-the-art analysis. We soon expect to be in a position to release more specific, and possibly controversial, articles in which a bolder vision will be illustrated. Get notifications when more content gets published - you know the drill, we need your contact details - but we are not spammers!

And don’t forget to follow us on Twitter!


A Little on Property-Based Testing with PropEr

Fred Hebert's latest book Property-Based Testing with PropEr, Erlang and Elixir is out in a print version from The Pragmatic Programmers.

If you are like me, you’ve known you need to figure out property testing for a long time now and keep putting it off. Now that there is a book, with a free version even, it is the best time to get going.

The book even details avoiding a common pitfall that I certainly fall into any time I’ve tried picking up property-based testing: attempting to shoehorn property testing into any problem, even when a regular unit test is a better fit.

As is made clear in the book, practice is important to leveling up with property-based testing. So it is great that the book includes many exercises based on useful projects – not just sorting a list like the classic property test we’ve all seen and not been able to get beyond!

One thing I have to add to the content is running the properties through Common Test. You still want to be able to run them on their own with the PropEr plugin so you can fiddle with the options easily, but I find it far better to also include the tests in a CT suite, to keep all my tests behind a single command and output in CI, with a single cover configuration and nice rendered JUnit-based output in services like CircleCI.

An example of this method is found in this CT suite for `opencensus-erlang`. But it basically looks like:

-module(prop_SUITE).
-export([all/0, init_per_suite/1, prop_test/1]).

all() -> [prop_test].

init_per_suite(Config) ->
    [{property_test_tool, proper} | Config].

prop_test(Config) ->
    ct_property_test:quickcheck(prop_base:prop_test(), Config).
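The `prop_base:prop_test/0` referenced above lives in a separate PropEr module. As a sketch of what that could look like (the module and function names mirror the CT suite; the property body here is just a placeholder, not from the original post):

```erlang
%% Sketch of the prop_base module referenced by the CT suite above.
%% The property body is a placeholder: reversing a list twice
%% yields the original list.
-module(prop_base).
-include_lib("proper/include/proper.hrl").
-export([prop_test/0]).

prop_test() ->
    ?FORALL(L, list(integer()),
            lists:reverse(lists:reverse(L)) =:= L).
```

`prop_test/0` returns a property value, which is exactly what `ct_property_test:quickcheck/2` expects to run.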

The only issue is that ct_property_test wants to compile the tests, while rebar3 already takes care of that. To resolve this I have a PR, so if you agree please give it a thumbs up if it hasn’t been merged yet when you are reading this :)

So do yourself a favor and at least check out, or even better buy, the book.


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.