Process Doesn’t Burn People Out, People Burn People Out

I can hear you grimacing from here. I get it, “process” as an amorphous concept carries a lot of baggage. Just the word “process” is enough to clear a room. And also my title is a pun. Not even a pun, a malapropism…the lazy man’s pun.

So let me clarify my thesis real quick: process is neither inherently good nor evil. Whether it is used to benefit an organization depends entirely on the people who shape and interact with it.

What do I mean by process? In this context, simply the framework(s) a person, team, or organization uses to optimize results. That could be some kind of Scrum model with all the ceremonies involved and commonly tracked metrics therein, or something as loose as a two-person operation moving handwritten Post-it Notes across their living room wall as work progresses. It could be anything, honestly; the particulars won’t matter much for my purposes here.

What I’m most interested in exploring over a series of posts are the events, behaviors, or actions that — in my experience — will commonly lead to clashes between people and process. Because, regardless of an organization or team’s framework, some problems will come up. Maybe not even “problems,” but inefficiencies, whether immediately or over time, that will have a noticeable impact on project results.

I thought about this list a looooong while. Ultimately, the categories were hard to nail down, because so many things impact a team, process, or organization. This is just my best estimate of the broadest and most common; I suspect there are plenty more to be explored. Maybe this series will go on forever! (It won’t.)

But with that, let’s dive in.

Chapter 1: Apathy

First up: laziness with regard to shared practices or processes. This one’s pretty self-explanatory, but still important to recognize and address. Even one person on a core team being complacent about process can have impacts far beyond simply requiring more attention from Project Management.

For starters, someone ignoring or being lazy with process is unfairly placing their share of the process burden on the rest of the team. They might not even realize that their actions are forcing others to pick up the slack. Whether that’s the project manager, a software developer, or the rest of the team, someone is going to be doing extra work. And that’s assuming the rest of the team is healthy and motivated!

Worse: this behavior sends a signal that it’s ok to ignore process when you don’t like it. That is not a message that builds effective teams. Problems or concerns with process should be put in the spotlight as quickly as possible. It can be uncomfortable delivering what might be perceived as critical feedback, but simply ignoring annoyances covers up valuable conversation that could benefit the entire team.

Further, if someone is apathetic about following the practices your team has established, chances are pretty good that’s a symptom of a larger issue. Maybe even one of the issues I’ll be discussing in future posts in this series.

But enough about that, the point is made: Apathy = bad. So what do you do about it?

The good news is that hiding inside every problem is an opportunity for growth, learning, evolution. Dare I say…iterating.

A good question to start with in overcoming apathy is: how many people feel this way? If it’s unique to an individual, simply talk with them to understand the issue. More often than not, pain points within a process are completely solvable, but not if you don’t take the time to understand what the problems are.

And, honestly, it’s completely possible that someone who is complacent about process doesn’t even realize it. Have a lot of changes in direction or resourcing happened on your team lately? Has the process ever been collectively discussed and explained? Is the person in question divided among multiple efforts with different process intricacies? And so on. There are many perfectly valid reasons why a person might not be aware they’ve fallen behind on following the same processes as the rest of the team.

On that note, consider setting up a retrospective with the entire team, especially if more than one individual is experiencing process malaise. If you’re already doing retrospectives, use your next one to focus on this issue specifically (or just set up an ad hoc meeting for the purpose).

Whether one or many people are operating outside the lines of your collective process, it’s valuable for the whole team to hear: a) that it’s happening; b) why it’s happening; c) what’s going to be done to improve it.

Finally: regardless of the root cause(s) or the action plan you and the team put in place, please make sure to follow up. Otherwise, you will have become another example for this post.

That’s it for this installment! Tune in next time for a riveting read on Big Brother, a.k.a. Metrics: A Cautionary Tale.

DockYard is a digital product agency offering exceptional user experience, design, full stack engineering, web app development, software, Ember, Elixir, and Phoenix services, consulting, and training.


MongooseIM 3.1 - Inbox got better, testing got easier

This summer the MongooseIM team have not had a second to be lazy - the development of the MongooseIM 3.1 release took over what was supposed to be our downtime, when all major activities driven throughout the year slow down. We’re happy to say that this time of leisure has produced a release packed with important features, improvements, and excitement. Take a look at the version highlights and be part of this major step in creating a world-class communication server.

This time, a large part of the changes are focused on development efficiency, but rest assured, we’ve added new items you are going to appreciate.

In the “Test Runner” section, you get to learn all about a little tool we provide so that you can easily set up the development environment and run tests locally, without Travis CI.

Our Inbox extension has got three big updates that push the boundaries of what’s currently offered in the XMPP world. As a rule, we want to lead by example; less talk and more action. That is why, in cooperation with Forward, we decided to deliver an implementation of a popular use case as a unique feature in this software sector.

This release is also an important lesson for us. A lesson about edge cases and concurrency in integration testing. You don’t necessarily have to be an Erlang developer to benefit from the third section, but reading it allows you to learn with us.

The “Honorable Mentions” section may seem minor, but for some projects the items listed there can indeed make a difference! It’s a candy mix of different changes, so read carefully so as not to miss your favourite flavours!

Obviously, a single blog post is too small a space to tell a profound story about every new item in the changelog, so we encourage you to check it out. You can find a link at the bottom of this blog post.

Test Runner

Travis CI is our main verification tool for the Pull Requests we create and merge. While convenient and critical for our process, it is not very handy for day-to-day development. It is very common to frequently execute a limited subset of tests to ensure that a new piece of code works as expected. Waiting for Travis results every time would extend implementation time excessively, however, since it executes multiple presets while being a shared resource at the same time.

The test runner is a script that helps to set the development environment up, and run tests on a developer machine locally. The test runner shares a lot of code with our Travis CI build scripts, which ensures that test results are consistent between local and CI runs.

The test runner lets you choose which tests to run. This is useful when one of the tests is failing and we want to rerun it.

Since MongooseIM supports different database backends, the test runner is able to set a selected database up for you and configure MongooseIM. Simply put, it prepares the local environment before executing actual test suites.

The test runner supports a lot of options to customise your build and test execution scenario. It also supports shell autocompletion for option names and values.

We’ve prepared a recording for you that presents a basic usage of the script.


Please note that Asciinema allows you to pause and copy any text fragment you like, so it would be very easy for you to repeat the same steps.

New Inbox Features

Originally sponsored by and created for Forward, Inbox has been available as an open source extension for the last two months already. In MongooseIM 3.1, it returns with a bundle of fresh goodies. The first feature is MSSQL support.

Despite being less frequently used with MongooseIM than MySQL or PostgreSQL, MSSQL is still an important piece of the architecture for many projects, especially those running in the Azure cloud. We don’t want you to feel excluded, dear Microsoft users!

The second one is support for classic MUC group chats. MUC Light was never intended as a complete replacement for the original XEP-0045 solution. This means that numerous projects exist where mod_muc is still a better match than its younger sibling, and they may now benefit from inbox features as well!

Last but not least is timestamp support. First of all, timestamps are now stored in the DB and returned in inbox query results. For those using mod_inbox from MIM 3.0: you’ll need to update your schemas, but don’t worry - it isn’t very complicated. What’s more, a client may now request conversations from a certain time period and sort them by timestamp, both ascending and descending.

This is not our final word on this matter. You may expect further improvements to this extension in upcoming MongooseIM versions!

We’ve prepared a demo of the Inbox feature. It shows both the backend and the frontend side of it. The application used in the demo was designed by Forward.


Lessons learnt regarding CI

OK, these are short and sweet but nevertheless important:

  1. Avoid RPC in integration tests. RPC calls tend to time out in slow CI environments (such as Travis).
  2. When test users exchange messages, always wait until they are received to ensure proper synchronisation.
  3. On a slow machine, an MSSQL SELECT query may return more than one row (even when retrieving by the exact primary key value) as a consequence of a transaction deadlock.
  4. When you can’t use any other means of server state synchronisation, don’t use hardcoded sleep periods; replace them with an incremental backoff and verification in a loop. Sometimes you can’t predict whether a server state is updated properly in 500ms, 1000ms or 3000ms. Adding 5s waits everywhere may cause test suites to run veeery long.
  5. Be careful about leaking presences between cases. This applies to XMPP testing. Best practice is to generate fresh user accounts for every scenario.
  6. Some databases don’t support transactions so the new data may not be instantly available. For example, in the case of Riak (its Search function in particular) a delay between data insert and query is required.
  7. Sometimes creating a schema in a DB may fail the first time due to timing issues, so implement a retry loop in your DB provisioning scripts. This also applies to Riak.
  8. Did I mention creating new user accounts for every test case? It actually applies not only to XMPP. With this practice, you won’t have to worry about possible leftovers of a user’s state.
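Point 4 above, the incremental backoff, can be sketched in a few lines. This is a hypothetical wait_until/3 helper for illustration, not code from the MongooseIM test suites:

```erlang
%% Poll a condition with an incremental backoff instead of a hardcoded
%% sleep. Retries Fun() until it returns true or until the accumulated
%% wait exceeds Timeout (all times in milliseconds).
wait_until(_Fun, Timeout, _Delay) when Timeout =< 0 ->
    error(wait_until_timed_out);
wait_until(Fun, Timeout, Delay) ->
    case Fun() of
        true -> ok;
        _Other ->
            timer:sleep(Delay),
            %% Double the delay on each retry, capped at one second.
            wait_until(Fun, Timeout - Delay, min(Delay * 2, 1000))
    end.
```

A test can then call wait_until(fun() -> has_message(Alice) end, 5000, 50) instead of a fixed timer:sleep/1 (where has_message/1 stands for whatever check the test needs).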

Honorable mentions

ElasticSearch backend for Message Archive

Almost every MongooseIM module supports more than one type of backend. Usually it’s Mnesia, RDBMS and sometimes Riak. Message Archive Management is a noteworthy exception, as we’ve implemented RDBMS, Riak and Cassandra support for this module. Or “modules” actually, as it consists of over 30 Erlang files already.

It is a very special feature as it processes a vast amount of data and sometimes executes expensive queries. In order to ensure performance and match a project’s architecture, a wide range of supported DB backends is essential.

It is our pleasure to announce that yet another backend has joined the family: ElasticSearch.

OTP 21 support

OTP 21.0 was released about a month ago, and we added support for this version less than a week later! This is great news for all projects sticking to the most recent Erlang technology as pioneers in the BEAM world. The new platform version brought not only performance improvements but also some incompatibilities, which we’ve resolved, so MongooseIM still remains at a technological peak.

As a tradeoff, we’ve dropped official support for OTP 18.x. It should still be possible to compile 3.1 with this version with some minor code modifications, but we’re definitely moving forward. For example, dropping 18.x has allowed us to get rid of untyped map specifications. As a reminder, MongooseIM always supports the two most recent stable OTP branches (currently 19.x and 20.x) plus the one under active development (currently 21.x).

Jingle/SIP tutorial

SIP is a common choice for VoIP applications, but certain XMPP features may be a very good match for such software. MongooseIM is able to liaise between these two worlds, and now it’s easier than ever thanks to significantly extended documentation (compared to the level in 3.0) and a tutorial on mod_jingle_sip usage.

Worker pool unification

Every developer writes a custom worker pool at some point in their career. Everyone. Certain MongooseIM components (the ones that use connection pools) were created with different preferred libraries in mind. As a result, we ended up with several kinds of worker pools: cuesport, worker_pool, and poolboy. This wasn’t only a matter of maintenance difficulty, but of performance as well. As an example, cuesport supports only a simple round-robin job assignment algorithm, which is not optimal in every case. It also lacks inspection of any kind.

Given the experience we’ve gathered over the years, we’ve selected worker_pool as our primary library. It is very flexible and exposes a dedicated stats API. It was originally created by Inaka and is still actively maintained by its original developers.

For now, the changes are purely internal. Some projects may observe better performance, but the primary goal was to prepare for a second round of unification. Stay tuned for more details in the near future.

Changelog

Please feel free to read the detailed changelog. Here, you can find a full list of source code changes and useful links.

Contributors

Special thanks to our contributors: @SamuelNichols @Beisenbek @GalaxyGorilla!

Test our work on MongooseIM 3.1 and share your feedback

Help us improve the MongooseIM platform:

  1. Star our repo: esl/MongooseIM
  2. Report issues: esl/MongooseIM/issues
  3. Share your thoughts via Twitter: @MongooseIM
  4. Download the Docker image with the new release
  5. Sign up to our dedicated mailing list to stay up to date about MongooseIM, messaging innovations and industry news.
  6. Check out our MongooseIM product page for more information on the MongooseIM platform.


Digging deeper in SSA

This blog post continues the exploration of the new SSA-based intermediate representation through multiple examples. Make sure to read the Introduction to SSA if you missed it.

Calling a BIF that may fail

The first example calls a guard BIF that may fail with an exception:

element_body(T) ->
    element(2, T).

The (optimized) SSA code looks like this:

function blog:element_body(_0) {
0:
  %% blog.erl:5
  _1 = bif:element literal 2, _0
  @ssa_bool = succeeded _1
  br @ssa_bool, label 3, label 1

3:
  ret _1

1:
  @ssa_ret = call remote (literal erlang):(literal error)/1, literal badarg
  ret @ssa_ret
}

Let’s go through the code a few lines at a time:

  %% blog.erl:5
  _1 = bif:element literal 2, _0
  @ssa_bool = succeeded _1

The bif:element instruction calls the guard BIF element/2, assigning the value to the variable _1 if the call is successful.

What if the call is not successful?

The succeeded _1 instruction tests whether the previous instruction assigning to _1 was successful. true will be assigned to @ssa_bool if the second element was successfully fetched from the tuple, and false will be assigned otherwise.

  br @ssa_bool, label 3, label 1

The br instruction tests whether @ssa_bool is true. If true, execution continues at block 3, which returns the value of the second element from the tuple. If false, execution continues at block 1.

It was mentioned in the previous blog post that block 1 is a special block that the SSA code generator always emits. In the previous examples, it was never referenced and therefore removed by one of the optimization passes.

In this example, it is used as the target when the call to element/2 fails.

The BEAM code generator treats references to block 1 specially. Here follows the BEAM code for the function. As usual, I have omitted the function header.

  %% Block 0.
  {line,[{location,"blog.erl",5}]}.
  {bif,element,{f,0},[{integer,2},{x,0}],{x,0}}.
  return.

Note that no code has been generated for block 1.

The line instruction gives the file name and line number in the source file. It will be used in the stack backtrace if the following instruction fails.

The bif instruction calls the given guard BIF, element/2 in this case. The {f,0} operand gives the action to take if element/2 fails. The number 0 is a special case, meaning that a badarg exception should be raised if the call to element/2 fails.

A failing BIF call in a guard

In the next example, element/2 is called in a guard:

element_guard(T) when element(2, T) =:= true ->
    ok;
element_guard(_) ->
    error.

The SSA code looks like this:

function blog:element_guard(_0) {
0:
  %% blog.erl:7
  _1 = bif:element literal 2, _0
  @ssa_bool = succeeded _1
  br @ssa_bool, label 4, label 3

4:
  @ssa_bool:5 = bif:'=:=' _1, literal true
  br @ssa_bool:5, label 6, label 3

6:
  ret literal ok

3:
  ret literal error
}

The first two instructions in block 0 are the same as in the previous example. The br instruction has different labels, though. The failure label refers to block 3, which returns the value error. The success label continues execution at block 4.

4:
  @ssa_bool:5 = bif:'=:=' _1, literal true
  br @ssa_bool:5, label 6, label 3

Block 4 is the translation of the =:= true part of the Erlang code. If the second element in the tuple is equal to true, execution continues at block 6, which returns the value ok. Otherwise execution continues at block 3, which returns the value error.

Here is the BEAM code:

  {bif,element,{f,5},[{integer,2},{x,0}],{x,0}}.
  {test,is_eq_exact,{f,5},[{x,0},{atom,true}]}.
  {move,{atom,ok},{x,0}}.
  return.
{label,5}.
  {move,{atom,error},{x,0}}.
  return.

In the bif instruction, {f,5} means that execution should continue at label 5 if the element/2 call fails. Otherwise execution will continue at the next instruction.

Our first case

Here is the next example:

case1(X) ->
    case X of
        1 -> a;
        2 -> b;
        _ -> c
    end.

Translated to SSA code:

function blog:case1(_0) {
0:
  switch _0, label 3, [ { literal 2, label 5 }, { literal 1, label 4 } ]

4:
  ret literal a

5:
  ret literal b

3:
  ret literal c
}

The switch instruction is a multi-way branch to one of any number of other blocks, based on the value of a variable. In this example, it branches based on the value of the variable _0. If _0 is equal to 2, execution continues at block 5. If _0 is equal to 1, execution continues at block 4. If the value is not equal to any of the values in the switch list, execution continues at the block referred to by the failure label, in this example block 3.

The BEAM code looks like this:

  {select_val,{x,0},{f,10},{list,[{integer,2},{f,9},{integer,1},{f,8}]}}.
{label,8}.
  {move,{atom,a},{x,0}}.
  return.
{label,9}.
  {move,{atom,b},{x,0}}.
  return.
{label,10}.
  {move,{atom,c},{x,0}}.
  return.

Terminators

As mentioned in the previous blog post, the last instruction in a block is called a terminator. A terminator either returns from the function or transfers control to another block. With the introduction of switch, the terminator story is complete. To summarize, a block can end in one of the following terminators:

  • ret to return a value from the function.

  • br to either branch to another block (one-way branch), or branch to one of two possible other blocks based on a variable (two-way branch).

  • switch to branch to one of any number of other blocks.

Another case

Here is a slightly different example:

case2(X) ->
    case X of
        1 -> a;
        2 -> b;
        3 -> c
    end.

In this case, X must be one of the integers 1, 2, or 3. Otherwise, there will be a {case_clause,X} exception. Here is the SSA code:

function blog:case2(_0) {
0:
  switch _0, label 3, [ { literal 3, label 6 }, { literal 2, label 5 }, { literal 1, label 4 } ]

4:
  ret literal a

5:
  ret literal b

6:
  ret literal c

3:
  _2 = put_tuple literal case_clause, _0

  %% blog.erl:20
  @ssa_ret:7 = call remote (literal erlang):(literal error)/1, _2
  ret @ssa_ret:7
}

The failure label for the switch is 3. Block 3 builds the {case_clause,X} tuple and calls erlang:error/1.

Here is the BEAM code:

  {select_val,{x,0},
              {f,16},
              {list,[{integer,3},
                     {f,15},
                     {integer,2},
                     {f,14},
                     {integer,1},
                     {f,13}]}}.
{label,13}.
  {move,{atom,a},{x,0}}.
  return.
{label,14}.
  {move,{atom,b},{x,0}}.
  return.
{label,15}.
  {move,{atom,c},{x,0}}.
  return.
{label,16}.
  {line,[{location,"blog.erl",20}]}.
  {case_end,{x,0}}.

The case_end instruction is an optimization to save space. It is shorter than the equivalent:

  {test_heap,3,1}.
  {put_tuple2,{x,0},{list,[{atom,case_clause},{x,0}]}}.
  {line,[{location,"blog.erl",20}]}.
  {call_ext_only,1,{extfunc,erlang,error,1}}.

(The put_tuple2 instruction was introduced in #1947: Introduce a put_tuple2 instruction, which was recently merged to master.)

Our final case

It’s time to address the kind of case similar to what was teased at the end of the previous blog post.

In this example, the variable Y will be assigned different values in each clause of the case:

case3a(X) ->
    case X of
        zero ->
            Y = 0;
        something ->
            Y = X;
        _ ->
            Y = no_idea
    end,
    {ok,Y}.

Perhaps a more common way to write this case would be:

case3b(X) ->
    Y = case X of
            zero -> 0;
            something -> X;
            _ -> no_idea
        end,
    {ok,Y}.

In either case, the problem remains. Static Single Assignment means that each variable can only be given a value once. So how can this example be translated to SSA code?

Here follows the SSA code for case3a/1. The SSA code for case3b/1 is almost identical except for variable naming.

function blog:case3a(_0) {
0:
  switch _0, label 4, [ { literal something, label 6 }, { literal zero, label 5 } ]

5:
  br label 3

6:
  br label 3

4:
  br label 3

3:
  Y = phi { literal no_idea, 4 }, { literal 0, 5 }, { _0, 6 }
  _7 = put_tuple literal ok, Y
  ret _7
}

Let’s jump right to the interesting (and confusing) part of the code:

3:
  Y = phi { literal no_idea, 4 }, { literal 0, 5 }, { _0, 6 }

Clearly, Y is only given a value once, so the SSA property is preserved.

That’s good, but exactly what is the value that is being assigned?

The name of the instruction is phi, which is the name of the Greek letter φ. Having an unusual name, the instruction deserves to have unusual operands, too. Each operand is a pair, the first element in the pair being a value and the second element a block number of a predecessor block. The value of the phi node will be one of the values from one of the pairs. But from which pair? That depends on the number of the previous block that branched to the phi instruction.

To make that somewhat clearer, let’s look at all operands:

  • { literal no_idea, 4 }: If the number of the block that executed br label 3 was 4, the value of the phi instruction will be the value in this pair, that is, the atom no_idea. The failure label for the switch instruction is 4, so this pair will be chosen when _0 does not match any of the values in the switch list.

  • { literal 0, 5 }: If the number of the block that executed br label 3 was 5, the value of the phi instruction will be the integer 0. The switch instruction will transfer control to block 5 if the value of _0 is the atom zero.

  • { _0, 6 }: Finally, if _0 is the atom something, the switch will transfer control to block 6, which will transfer control to block 3. The value of the phi instruction will be the value of the variable _0.

The concept of phi instructions probably feels a bit strange at first sight (and at second sight), and one might also think they must be terribly inefficient.

Leaving the strangeness aside, let’s talk about the efficiency. phi instructions are a convenient fiction for representing and optimizing the code. When translating to BEAM code, the phi instructions are eliminated.

Here follows an example that is not SSA code, because it assigns the variable Y three times, but it gives an idea of how the phi instruction is eliminated:

%% Not SSA code!
function blog:case3a(_0) {
0:
  switch _0, label 4, [ { literal something, label 6 }, { literal zero, label 5 } ]

5:
  Y := literal 0
  br label 3

6:
  Y := _0
  br label 3

4:
  Y := no_idea
  br label 3

3:
  _7 = put_tuple literal ok, Y
  ret _7
}

The BEAM code generator (beam_ssa_codegen) does a similar rewrite during code generation.

Here is the unoptimized BEAM code, slightly edited for clarity:

%% Block 0.
{select_val,{x,0},
            {f,53},
            {list,[{atom,something},{f,55},{atom,zero},{f,57}]}}.

%% Block 5.
{label,57}.
  {move,{integer,0},{x,0}}.
  {jump,{f,59}}.

%% Block 6.
{label,55}.
  %% The result is already in {x,0}.
  {jump,{f,59}}.

%% Block 4.
{label,53}.
  {move,{atom,no_idea},{x,0}}.
  {jump,{f,59}}.

%% Block 3.
{label,59}.
   {test_heap,3,1}.
   {put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
   return.

Here is the final BEAM code after some more optimizations:

{label,18}.
  {select_val,{x,0},
              {f,20},
              {list,[{atom,something},{f,21},{atom,zero},{f,19}]}}.
{label,19}.
  {move,{integer,0},{x,0}}.
  {jump,{f,21}}.
{label,20}.
  {move,{atom,no_idea},{x,0}}.
{label,21}.
  {test_heap,3,1}.
  {put_tuple2,{x,0},{list,[{atom,ok},{x,0}]}}.
  return.

The cold case

Here is the example from the end of the previous blog post:

bar(X) ->
    case X of
        none ->
            Y = 0;
        _ ->
            Y = X
    end,
    Y + 1.

And here is the SSA code:

function blog:bar(_0) {
0:
  @ssa_bool = bif:'=:=' _0, literal none
  br @ssa_bool, label 5, label 4

5:
  br label 3

4:
  br label 3

3:
  Y = phi { _0, 4 }, { literal 0, 5 }

  %% blog.erl:52
  _6 = bif:'+' Y, literal 1
  @ssa_bool:6 = succeeded _6
  br @ssa_bool:6, label 7, label 1

7:
  ret _6

1:
  @ssa_ret = call remote (literal erlang):(literal error)/1, literal badarg
  ret @ssa_ret
}

It is left as an exercise to the reader to read and understand the code.

Here is the BEAM code:

{label,28}.
  {test,is_eq_exact,{f,29},[{x,0},{atom,none}]}.
  {move,{integer,0},{x,0}}.
{label,29}.
  {line,[{location,"blog.erl",52}]}.
  {gc_bif,'+',{f,0},1,[{x,0},{integer,1}],{x,0}}.
  return.

The gc_bif instruction calls a guard BIF that might need to do a garbage collection. Since integers can be of essentially unlimited size in Erlang, the result of + might not fit in a word. The 1 following {f,0} is the number of registers that must be preserved; in this case, only {x,0}.
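To see why the result of + might need heap space, recall that Erlang integers are arbitrary-precision: on a 64-bit BEAM, a small integer is a tagged immediate of about 60 bits, and anything larger is transparently promoted to a heap-allocated bignum. A quick shell session illustrates this:

```erlang
1> 1 bsl 10.                 %% 1024: still a small integer (immediate)
1024
2> (1 bsl 59) + 1.           %% no longer fits in a small: a bignum on the heap
576460752303423489
```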


Heaps with Elixir

Seven years ago I wrote a post while studying heaps for the Programación y Estructuras de Datos Avanzadas (Programming and Advanced Data Structures) course at UNED. Revisiting that post and seeing that many people still consult it, I decided to pick it up again and update it; I have also created this new post to see how it would be done in Elixir. Care to take a look at heaps in Elixir?


Debugging Elixir With Three Tools: ElixirConf 2018

Luke Imhoff, an engineer with DockYard who developed the Elixir plugin for JetBrains’ IntelliJ platform, shares his findings on debugging Elixir code using three different classes of tools: IO, Pry, and line-based graphical tools. [IntelliJ Elixir 9.0.0](https://github.com/KronicDeth/intellij-elixir/releases/tag/v9.0.0) was released in early September.

Listen to Luke explain how to run debug functions in his ElixirConf 2018 presentation, and follow up with his lightning talk on resources for BEAM internals.



Rebar3: Building Docker Images

How I cut the time it takes to build an Erlang Docker image in half.

While adding support for a rebar3 option to only build dependencies and not project applications, rebar3 compile --deps_only, I realized it might already actually work for the Docker use case I had in mind.

So I gave it a try and, yup, turns out it does!

The idea is to cache a Docker layer containing the built dependencies and reuse it when they haven't changed. This is done by copying in only the config files - which, if changed, will invalidate the subsequent Dockerfile commands - and building only the dependencies in the next layer. After that, you copy in the source of your project as usual and build the release.

# build and cache dependencies
COPY rebar.config rebar.lock /usr/src/app/
RUN rebar3 compile

# copy in the vonnegut source and build release
COPY . /usr/src/app
RUN rebar3 as prod tar

No new functionality was needed; plain rebar3 compile does what we need when it is given a directory with only the config and/or lock file. After copying in the rest of the project, rebar3 as prod tar will compile the project's source (the `tar` provider depends on compile, so we don't need to call compile directly) and build a release tarball we can copy to a new image in the next stage.
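Put together, the full multi-stage Dockerfile might look something like the sketch below. The base images, the release path under _build/prod/rel, and the way the tarball is unpacked are illustrative assumptions, not the project's actual file:

```dockerfile
# --- build stage ---
FROM erlang:21 as builder
WORKDIR /usr/src/app

# build and cache dependencies: this layer is reused as long as
# rebar.config and rebar.lock are unchanged
COPY rebar.config rebar.lock /usr/src/app/
RUN rebar3 compile

# copy in the source and build the release tarball
COPY . /usr/src/app
RUN rebar3 as prod tar

# --- runtime stage ---
FROM debian:stretch-slim
WORKDIR /opt/vonnegut
COPY --from=builder /usr/src/app/_build/prod/rel/vonnegut/vonnegut-*.tar.gz /opt/
RUN tar -xzf /opt/vonnegut-*.tar.gz -C /opt/vonnegut && rm /opt/vonnegut-*.tar.gz
CMD ["/opt/vonnegut/bin/vonnegut", "foreground"]
```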

In a quick test with the vonnegut Dockerfile, I found the build times dropped from ~30 seconds to ~14 seconds. The initial build, with only the Erlang image cached - so the packages and rebar3 still have to be installed - takes around 50 seconds.

Note that this also works when only copying rebar.lock. Only copying the lock file may be beneficial to some who don't care about caching plugins and prefer to be able to change the rebar config without it invalidating the cache.

Also, we will still be merging the --deps_only option, as it likely has uses outside of building Docker images.


LiveView Project at ElixirConf 2018

Phoenix developer Chris McCord announced LiveView, a new project for creating rich user experiences on web applications, in his keynote presentation at ElixirConf 2018.

Organizations are increasingly turning to enhanced web app design to improve their customers’ digital experiences. Building those web apps means relying on high-performance technology. LiveView, built on standard Phoenix, delivers that performance.

Users want to interact with applications, Chris notes in his presentation. It’s critical that the app platform makes interaction seamless and effortless.

Listen to Chris break down LiveView and watch his whole presentation to learn more.



Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.