Tips for success when working from home.

The need to adhere to social distancing has resulted in millions of workers around the world setting up their very own home office for the first time. When done right, working remotely can lead to improved focus and efficiency and, as a result, improved performance.

As a company, we are well equipped to handle a remote workforce: many of our staff members and colleagues already work in a distributed manner, and most of our client work is already delivered remotely, often by distributed teams. Consequently, we have procedures, systems and staff already geared to delivering exceptional work remotely. On top of this, we will be implementing extra measures to support the productivity and mental health of our teams during these stressful times. Given our existing experience of delivering results while managing distributed teams and a remote workforce, we thought it was worthwhile to share some of the essential tips we recommend for sustainable long-term success.

Establish a clear and acknowledged way of communicating with your colleagues.

Understanding the shared expectations for communication saves time and avoids confusion. For many, this will result in a slight increase in the number of short, regular WIP updates.
This is extra important when working across different time zones.

Use video conferencing.

This is an important aspect of building trust and rapport. The more often you show your face to your colleagues, the more likely they are to feel a human connection to you, which strengthens the sense of belonging to a team for both parties.

Update your status on workplace chat.

Keeping an accurate status, including sharing what you are working on, helps your teammates know when to contact you.

Visualise your work and share it regularly.

Keeping a shared WIP document helps others to plan their dependencies around your work. It also avoids several people doing the same thing at once.

Timebox your work.

Working in short 15-25 minute sessions with short breaks in between helps you focus.

Regularly evaluate and learn about what works for you and your team.

Doing remote retrospectives together with colleagues will allow you to learn from each other and improve how you work remotely as a team.

Separate working space from private space.

Setting aside a specific place for work helps to keep a mental barrier between work and home life. This makes it easier to maintain a work-life balance. Ideally, find a workspace with access to natural light, minimal clutter and a comfortable chair for your back.

Separate working hours from non-working hours.

Make sure your work life doesn’t encroach on your home life and vice versa. Daily habits ensure long-term sustainable health and success.

Keep a daily routine.

Just as you would when working from an office, a daily routine helps you get in the right frame of mind for working.

Dress for work.

An extension of the point above is to continue to present as if you were still coming into the office. Not only will this help add to the mental separation between work time and personal time, but it will also help maintain trust between you, your clients and your colleagues.

Keep in touch with your colleagues.

Socialising with colleagues helps reduce the feeling of isolation that can come with remote work. Things like virtual lunch breaks can provide a reliable, regular opportunity for people who feel like catching up.

Keep in touch with your manager.

Keeping needs and expectations aligned builds trust, improves efficiency and increases your visibility. This is essential for your personal success and development and leads to a better experience and result for our customers.

Have Fun!

Remember that the kitchen and water cooler catch-ups no longer happen incidentally. Make an effort to share funny stories, links, jokes and videos in channels designated for your team’s social interaction on your workplace chat, intranet or Slack.

Stay safe. And, if you’d like to talk to us about how our remote team can help your system handle billions of concurrent users, get in touch.


Coronavirus: community support, and how you can help.

Let us know how we can help you.

The global outbreak of COVID-19 has impacted us all.

We hope that you are safe and extend our thoughts and best wishes to everyone who is directly affected by the crisis.

As these are unprecedented circumstances, we felt it was essential to work collaboratively with the community to ensure we are serving you effectively over the coming weeks.

In the interest of public health, physical meetups and conferences are being postponed and rescheduled. We pride ourselves on our connection to the community, and will be adapting to the current events to find alternative ways to share, inspire and connect users of the BEAM.

We are aware that in the current climate, the role of digital media is changing. Some of you are likely using it primarily to stay up to date with the unfolding events, others may be using it to stay connected socially, while others will be minimising contact with social media to avoid COVID-19 anxiety.

In the interest of reaching you on the right channels, with the information you need, we wanted your feedback. We will be doing all we can to support the community with the digital channels available to us, including free webinars, guides, virtual meetups and digital conferences.

We’d love your input to ensure this is productive, useful and empowering for the community. Please fill out this survey and tell us what content you’d like to see.

The impact of global health concerns on Code Sync events.

We have an incredible line-up of conferences for 2020, but due to the current global concerns around COVID-19, we have had to reconsider some of our plans.

We would love it if you could fill out the survey mentioned above to help us understand your preferences for rescheduled events. As this is a fast-moving situation, we recommend you join our mailing list or visit the Code Sync website to stay up to date with the latest information.

Digital events.

Over the coming months, both Erlang Solutions and Code Sync will be working together to ensure that there is still a forum for communal knowledge sharing, where we can learn from and inspire other members of the BEAM community.

The Erlang Solutions monthly webinars will continue as usual. You can still register for our March webinar with the Massachusetts Bay Transportation Authority, which will be held on March 25th at 17:30 GMT.

We will also be running virtual meetups. The first of these will take place on Wednesday April 2nd. To increase the social feel, this will be run on Google Hangouts. You can register to receive details and reminders here.

Finally, we are looking into a series of virtual conferences. The first of these will take place on April 3rd. Head to the Code Sync website for updates.

Stay tuned via our LinkedIn, Facebook, Twitter or newsletter for more details of events as they are announced.


How to start processes with dynamic names in Elixir

I’ve run into a couple of situations recently where I’ve wanted to kick off a long-running process in response to some action (for example, an API call). A great strategy for this is to pair GenServers with a DynamicSupervisor.

For example, let’s say we have the following GenServer which we want to start up on demand. For simplicity, its only job is to print out a log statement and exit, but you can imagine more complicated business logic in here.

defmodule Greeter do
  use GenServer, restart: :transient

  def start_link(person),
    do: GenServer.start_link(__MODULE__, person, name: __MODULE__)

  def init(person),
    do: {:ok, person, {:continue, nil}}

  def handle_continue(nil, person) do
    IO.puts("👋 #{person}")
    {:stop, :normal, person}
  end
end

If the restart: :transient part is unfamiliar to you, it’s just how we say that it’s okay for this process to terminate when it’s done. The default value for this option is :permanent, which says that the process should always be brought back up if it terminates. This is fine for a long-running process, but since the job of our GenServer is to perform some work and then exit, we need to modify it. You can read more about the restart option in the Supervisor docs.
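
Under the hood, use GenServer bakes this option into the child spec that the module generates. A quick way to see it from an IEx session, assuming the Greeter module above is compiled (the "Frodo" argument is just an illustrative value):

```elixir
# The generated child_spec/1 carries the restart option we passed to `use GenServer`:
Greeter.child_spec("Frodo")
# => %{id: Greeter, restart: :transient, start: {Greeter, :start_link, ["Frodo"]}}
```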

Also, as a total aside, if you’ve not seen handle_continue/2 before, you should check it out—it’s pretty cool!

To complete our little example, here’s a DynamicSupervisor we can use to start up our dynamic children:

defmodule GreeterSupervisor do
  use DynamicSupervisor

  def start_link(arg),
    do: DynamicSupervisor.start_link(__MODULE__, arg, name: __MODULE__)

  def init(_arg),
    do: DynamicSupervisor.init(strategy: :one_for_one)

  def greet(person),
    do: DynamicSupervisor.start_child(__MODULE__, {Greeter, person})
end

You may have noticed that both our GenServer and DynamicSupervisor give their names as __MODULE__. In the case of the supervisor, this is fine, since the task we’re trying to accomplish just requires a single supervisor. But since our goal is to start up many instances of our GenServer, we’ll need to give that a unique name. Let’s update our example to do this.

defmodule Greeter do
  # ...

  def start_link(person),
    do: GenServer.start_link(__MODULE__, person, name: process_name(person))

  defp process_name(person),
    do: String.to_atom("greeter_for_#{person}")

  # ...
end

Notice here that Elixir compels us to convert our process’ name into an atom. If we didn’t do this, we’d get an ArgumentError when starting up the child.
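
To see the failure mode for yourself, try registering a process under a plain string. Agent is used here purely for illustration (it follows the same name-validation rules as GenServer), and the name is a made-up example:

```elixir
# A local name must be an atom; a string is rejected before the process starts.
Agent.start_link(fn -> 0 end, name: "greeter_for_Frodo")
# raises ArgumentError, because :name only accepts an atom, {:global, _} or {:via, _, _}
```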

This dynamic creation of atoms is problematic, however, because Erlang—and by extension, Elixir—has a hard upper limit on the number of unique atoms an application can allocate. Once that limit is reached, the Erlang VM will crash. Moreover, atoms are never garbage collected, meaning that every new atom created will stick around for the entire lifetime of the application.

The code String.to_atom("greeter_for_#{person}") is tricky, then, because it allocates a new atom for each person we greet. The default size of the atom table is 1,048,576, meaning we could greet about a million unique people before our app crashed. That may sound like a lot, but because we’re sharing the atom table with everything else our app is doing, we might hit the limit faster than we think.
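
If you’re curious, you can watch the atom table fill up on a live node. These introspection calls exist in Erlang/OTP (since OTP 20) and are callable from Elixir:

```elixir
# Inspect current atom usage against the VM's hard limit:
IO.inspect(:erlang.system_info(:atom_count), label: "atoms in use")
IO.inspect(:erlang.system_info(:atom_limit), label: "atom limit")
# The limit defaults to 1,048,576 and can be raised with the +t emulator flag.
```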

Happily, Elixir provides us with a mechanism to handle this situation. It’s called a Registry. A registry is basically a way to map a process’ name to its underlying process ID (PID). (Registries have a couple other uses, too, but we won’t get into those here.)

The internal registry which Elixir normally uses to convert a process’ name into a PID intentionally requires that process names be atoms. This is because over the years Erlang has optimized atoms to be extremely fast when used as lookup keys. (One of these design choices, interning every atom in a single global table, is the reason behind the global atom limit to begin with.)

If we use the Registry module to run our own process name registry, however, we’re free to support any type of process name we like. This does come at a small efficiency cost, of course—what doesn’t?—but this is likely a very small part of the overall work performed by your GenServer, so it’s not worth worrying about too much.

Running your own registry is actually quite simple, and doesn’t even require defining a module. The only thing you have to do is add one line to the start/2 function of your Application module:

def start(_type, _args) do
  # ...

  children = [
    # ...
    {Registry, keys: :unique, name: GreeterRegistry}
  ]

  # ...
end

The keys: :unique part just says that we want process names to be unique, and the name is just some way for us to identify the registry. We might want to run other registries with different purposes down the line, so it’s a good practice to give registries names indicative of their use case.

We then have to instruct our GenServer to use our special new registry instead of the default one. For us this just means changing our process_name/1 function.

defmodule Greeter do
  # ...

  def start_link(person),
    do: GenServer.start_link(__MODULE__, person, name: process_name(person))

  defp process_name(person),
    do: {:via, Registry, {GreeterRegistry, "greeter_for_#{person}"}}

  # ...
end

Note how we’ve been able to remove the String.to_atom/1 call. Nice!

Dealing with this “via tuple” (as it’s called) is a little more cumbersome than a simple atom, but if you define something like our process_name/1 function, it’s not too bad. And, more importantly, your app won’t crash because it ran out of atoms, which is also pretty cool.
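
As a side benefit, the registry also gives us an easy way to find a running process by its string name. A small self-contained sketch (the Agent and the "Frodo" key are illustrative stand-ins; in the real app the registry would be started by your supervision tree as shown earlier):

```elixir
# Start a registry and register a process under a string key via a via tuple:
{:ok, _} = Registry.start_link(keys: :unique, name: GreeterRegistry)

name = {:via, Registry, {GreeterRegistry, "greeter_for_Frodo"}}
{:ok, pid} = Agent.start_link(fn -> :ready end, name: name)

# The string key now resolves back to the PID:
[{^pid, nil}] = Registry.lookup(GreeterRegistry, "greeter_for_Frodo")
```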

So that’s it! For the super curious, this answer on an Erlang mailing list goes a bit more in depth as to the design and performance considerations that went into giving Erlang an atom limit to begin with. Thanks for reading along!


Which new companies are using Erlang and Elixir? #MyTopdogStatus

Last year we launched #MyTopDogStatus to share and celebrate some of the fantastic companies using Erlang and Elixir technology. Since then, the Erlang and Elixir community has continued to thrive. Last year, our friends at Code Sync saw a wave of brand-new Erlang and Elixir users attending their events, with representatives of 53 new companies added to the list. And that’s on top of many repeat attendees. The BEAM community now has at least 1,300 companies using Elixir in their tech stack, and on average three new Elixir jobs are advertised per week.

The benefits of BEAM technologies remain the same as they were last year: the concurrency model means higher uptime, so fewer lost sales or angry users when something goes wrong. Both languages result in less code, making systems easier to manage and update and reducing the cost of development. Smaller codebases also allow companies to reduce the cost and energy demands of their physical servers, which benefits both the bottom line and the planet. Both languages support hot code loading, so changes can be made in minutes, not months, allowing for a more agile system. Whichever benefit is the core motivator, the BEAM-based technologies Erlang and Elixir are almost always the secret sauce in the tech stack of large household-name brands that need to handle large volumes of users and data.

So with all those benefits to take advantage of, who were the new top dogs to join the pack and start reaping the benefits of Erlang and Elixir in the last year? Let’s find out.

Samsung

The biggest of the new kids on the block is Samsung, a company which needs no introduction, with nearly 300 million mobile devices shipped a year. Given the number of users, verticals and traffic Samsung handles, it’s no surprise that they are looking into incorporating BEAM technologies into their tech stack. This year, our very own Robert Virding, one of the co-creators of Erlang, joined them in their Silicon Valley office to present a meetup on the Erlang ecosystem. Our team will continue to work with them on Erlang training and onboarding.

Cross River Bank

Cross River Bank is an innovative and ambitious take on banking. They combine the traditional features of a state-chartered bank with the flexibility and ambition of a FinTech. Their product features an extensive suite of services as a banking-as-a-platform offering, including lending, payments and risk management. Their smart work with technology partners allows them to focus on growing their business without compromising on technological innovation. As you can imagine, downtime in a sector dealing with large financial transactions needs to be minimised, so it’s no surprise to see BEAM-based technologies in their stack.

TubiTV

TubiTV is an entirely free alternative to video streaming services such as Netflix or Amazon Prime. It is America’s largest independently owned streaming service and operates on an ad revenue model as opposed to a user subscription model. There are over 15,000 videos available to stream at any one time. Dealing with that level of data is always going to be a good use case for a language like Elixir; you can learn more about some of their Elixir implementations over at the TubiTV GitHub page.

Handzap

Handzap is an agile, mobile-first app designed to make the gig economy easy, available in 32 languages and over 100 countries. People can post tasks, review applicants and chat using instant messaging, video messaging and voice calls, all from within the app. This makes it easier and more convenient than ever to post, publish, manage and ensure satisfactory delivery of tasks for applicants and job posters alike. Due to its ambitious nature, Handzap had a number of unique requirements for its chat functionality, particularly around user privacy. The MongooseIM team at Erlang Solutions knows that no two sets of users are the same, and specialises in creating customised, reliable instant messaging solutions. We helped Handzap build a future-proof messaging platform that integrated seamlessly with their application while making sure all their specific requirements were met.

Danske Bank

Danske Bank is Denmark’s largest bank, a Fortune 500 company with over 145 years of history. It has grown to be one of the biggest banks in Northern Europe and the world. BEAM technologies are perfectly suited to the high-value transactions of the financial services industry, and RabbitMQ (built in Erlang) specifically is often used in technology stacks to handle microservices.

OpenX

Digital advertising requires huge amounts of real-time decision making and needs to be able to handle significant spikes and loads. OpenX handles over 4.5 trillion data events per day, more than 50 million per second. As we have seen with AdRoll in the past, Erlang’s concurrency is perfect for managing the high volumes of concurrent users required for the industry.

SnatchBot

SnatchBot is a fantastic new platform that aims to make the potential of chatbots available to everyone, including non-technical audiences who can’t write a single line of code. The SnatchBot Builder uses building block templates to allow you to publish your bot to mobile devices, web apps and chat services like Facebook Messenger, Slack and Skype. As we mentioned with Handzap, the MongooseIM team are world-class experts at developing innovative and ambitious instant messaging solutions and were happy to help on such an exciting project.

Siemens

Siemens is the largest industrial manufacturing company in Europe, with nearly half a million employees. They cover a diverse range of industries including Energy, Healthcare, Infrastructure and cities. If you look into any company of this size and scale, you’re likely to find a BEAM-based technology somewhere in their stack, and Siemens is no different.

PepsiCo

We mentioned PepsiCo’s use of Elixir in their tech stack to power business-critical solutions in our article last year. This year, our team will continue to work with them to train, upskill and grow the use of Elixir in their tech stack.

SITA

SITA (Société Internationale de Télécommunications Aéronautiques) is a large-scale organisation that specialises in telecommunications and IT for the aviation industry. It is a member-owned organisation comprised of over 400 major airlines, ensuring the technology and communications of all flights are up to safety standards.


With all these great companies taking advantage of Erlang, Elixir, RabbitMQ and the BEAM, isn’t it time you looked into how you can get a more reliable, fault-tolerant and dynamic solution? Talk to us; we’re always happy to help.

You may also like:

Which companies are using Erlang?
Which companies are using Elixir?
How to build machine learning in Elixir.
How to manage IoT edge data with Erlang.


Sharing Protobuf schemas across services

The system that we’re building is made of a few services (around fifteen at the time of writing) that interact with each other through a basic version of event sourcing. All events are exchanged (published and consumed) through RabbitMQ and are serialized with Protobuf. With several services already and many more coming in the future, managing the Protobuf schemas becomes a painful part of evolving and maintaining the system. Do we copy the schemas in all services? Do we keep them somewhere and use something akin to Git submodules to keep them in sync in all of our projects? What do we do?! In this post, I’ll go through the tooling that we came up with in order to sanely manage our Protobuf schemas throughout our services and technology stack.


Breaking Out of Ecto Schemas

Typically, when writing queries with Ecto, we use a module that uses an Ecto.Schema. By default, those queries return all fields defined in that schema. That makes a lot of sense when the intention is to retrieve a fully populated struct from the database.

But sometimes, we only want a subset of the fields defined in the schema. In fact, we may not even want to use a schema at all! For those cases, Ecto gives us the ability to drop one step closer to the raw power of SQL. Let’s take a look at an example.

Getting active users with comments

Suppose we want a report with a list of active users along with how many comments they have made. Instead of using an Ecto schema for the query, we can specify the table directly. And instead of preloading all comments to count them in Elixir, we can use our good old SQL friends SELECT, COUNT, and GROUP BY:

query =
  from u in "users",
    join: c in "comments",
    on: c.user_id ==,
    where: == true,
    group_by: [,],
    select: [,, count(]

Repo.all(query)

# => [
#      ["Gandalf", "", 23],
#      ["Aragorn", "", 45],
#      ["Gimli", "", 566],
#      ...
#    ]

Great! By using select, we don’t fetch unnecessary data to populate the full %User{} and %Comment{} structs, and by using count, we avoid having to preload all user comments just to count them in Elixir.

But not all is sunshine and rainbows yet. The return data is a list of lists, which means we can’t pass around that data without also having to specify that the first element of each list is the name, the second is the email, and the third is the number of comments for that user. And what if we were to return four, five, or six columns? It gets out of hand very quickly.

Fortunately, Ecto has a better way!

Mapping your select

Ecto lets us define the structure in which to return the data. For this report we can define a descriptive map:

query =
  from u in "users",
    join: c in "comments",
    on: c.user_id ==,
    where: == true,
    group_by: [,],
    select: %{name:, email:, comments_count: count(}

Repo.all(query)

# => [
#      %{comments_count: 23, email: "", name: "Gandalf"},
#      %{comments_count: 45, email: "", name: "Aragorn"},
#      %{comments_count: 566, email: "", name: "Gimli"},
#      ...
#    ]

That’s so much better! We now have the best of both worlds — a lean query that gets only what we want in a structure we can pass around without having to provide additional context.

What next?

If you liked this, take a look at Programming Ecto. Even though I’ve been using Ecto for quite some time, I recently read it and learned quite a few things. I highly recommend it.



During my interview with Gene Kim for Functional Geekery, Episode 128, Gene talked about a problem he had been asking different people how they would solve with a functional approach, hoping to improve his Clojure solution to be more idiomatic.

His problem was about “rewriting” Ibid. entries in citation references to get the authors’ names instead of the Ibid. value, as Ibid. is a shorthand that stands for “the author listed in the entry before this”.

As he was describing this problem, I was picturing the general pseudo-code with a pattern match in my head. To be fair, this has come from a number of years of getting used to thinking in a functional style as well as thinking in a pattern matching style.

The following Erlang code is a close representation to the pseudo-code that was in my head.

-module(ibid).
-export([ibid/1]).

ibid(Authors) ->
    ibid(Authors, []).

ibid([], UpdatedAuthors) ->
    {ok, lists:reverse(UpdatedAuthors)};
ibid(["Ibid." | _], []) ->
    {error, "No Previous Author for 'Ibid.' citation"};
ibid(["Ibid." | T], UpdatedAuthors=[H | _]) ->
    ibid(T, [H | UpdatedAuthors]);
ibid([H | T], UpdatedAuthors) ->
    ibid(T, [H | UpdatedAuthors]).

Running this in the Erlang shell using erl results in the following

> ibid:ibid(["Mike Nygard", "Gene Kim", "Ibid.", "Ibid.", "Nicole Forsgren", "Ibid.", "Jez Humble", "Gene Kim", "Ibid."]).
{ok,["Mike Nygard","Gene Kim","Gene Kim","Gene Kim",
     "Nicole Forsgren","Nicole Forsgren","Jez Humble","Gene Kim",
     "Gene Kim"]}
> ibid:ibid(["Ibid."]).
{error,"No Previous Author for 'Ibid.' citation"}

Throughout the editing of the podcast, I continued to think about his problem, and how I would approach it in Clojure without built-in pattern matching, and came up with the following using a cond instead of a pure pattern matching solution:

(defn update_ibids
  ([authors] (update_ibids authors []))
  ([[citation_author & rest_authors :as original_authors] [last_author & _ :as new_authors]]
    (let [ibid? (fn [author] (= "Ibid." author))]
      (cond
        (empty? original_authors) (reverse new_authors)
        (and (ibid? citation_author) (not last_author))
          (throw (Exception. "Found `Ibid.` with no previous author"))
        :else (recur
                rest_authors
                (cons (if (ibid? citation_author) last_author citation_author)
                      new_authors))))))
And if we run this in the Clojure REPL we get the following:

user=> (def references ["Gene Kim", "Jez Humble", "Ibid.", "Gene Kim", "Ibid.", "Ibid.", "Nicole Forsgren", "Micheal Nygard", "Ibid."])
#'user/references
user=> (update_ibids [])
()
user=> (update_ibids ["Ibid."])
Execution error at user/update-ibids (REPL:8).
Found `Ibid.` with no previous author
user=> (update_ibids references)
("Gene Kim" "Jez Humble" "Jez Humble" "Gene Kim" "Gene Kim" "Gene Kim" "Nicole Forsgren" "Micheal Nygard" "Micheal Nygard")

That solution didn’t sit well with me (and if there is a more idiomatic way to write it, I would love to see your solutions as well), and because of that, I wanted to see what could be done using the core.match library, which moves towards the pseudo-code I was picturing.

(ns ibid
  (:require [clojure.core.match :refer [match]]))

(defn update_ibids
  ([authors] (update_ibids authors []))
  ([orig updated]
    (match [orig updated]
      [[] new_authors] (reverse new_authors)
      [["Ibid." & _] []] (throw (Exception. "Found `Ibid.` with no previous author"))
      [["Ibid." & r] (([last_author & _] :seq) :as new_authors)] (recur r (cons last_author new_authors))
      [[author & r] new_authors] (recur r (cons author new_authors)))))

And if you are trying this yourself, don’t forget to add the dependency to your deps.edn file:

{:deps
 {org.clojure/core.match {:mvn/version "0.3.0"}}}

After the first couple of itches were scratched, Gene shared on Twitter Stephen Mcgill’s solution, as well as his own solution inspired by Stephen’s.

And then, just for fun (or “just for defun” if you prefer the pun intended version), I did a version in LFE (Lisp Flavored Erlang), since, being a Lisp on the Erlang runtime, it gets pattern matching built in.

(defmodule ibid
  (export (ibid 1)))

(defun ibid [authors]
  (ibid authors '[]))

(defun ibid
  ([[] updated]
    (tuple 'ok (: lists reverse updated)))
  ([(cons "Ibid." _) '[]]
    (tuple 'error "No Previous Author for 'Ibid.' citation"))
  ([(cons "Ibid." authors) (= (cons h _) updated)]
    (ibid authors (cons h updated)))
  ([(cons h rest) updated]
    (ibid rest (cons h updated))))

Which if we call it in LFE’s REPL gives us the following:

lfe> (: ibid ibid '["Mike Nygard" "Gene Kim" "Ibid." "Ibid." "Nicole Forsgren" "Ibid." "Jez Humble" "Gene Kim" "Ibid."])
#(ok
  ("Mike Nygard"
   "Gene Kim"
   "Gene Kim"
   "Gene Kim"
   "Nicole Forsgren"
   "Nicole Forsgren"
   "Jez Humble"
   "Gene Kim"
   "Gene Kim"))
lfe> (: ibid ibid '["Ibid."])
#(error "No Previous Author for 'Ibid.' citation")

If you have different solutions, shoot them my way, as I would love to see them. And if there looks to be interest, and some responses, I can create a catalog of different solutions, similar to what Eric Normand does with the weekly challenges in his newsletter.


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.