Why I Stayed With Elixir

Last weekend I attended the wroc_love.rb conference in the beautiful city of Wrocław. I got a chance to be a member of the “Ruby vs Elixir” panel session and spent hours discussing related topics during the afterparties.

I love Ruby; I’m a “Ruby native” - it was the first language I really learned, and the first one I used to make money as a programmer. But for the time being, I’ve decided to stick with Elixir. In this article I share some of the reasoning I expressed during the panel and the discussions, along with some other thoughts I’ve had since. I’ll also include links to other resources that probably explain the points I’m making better and in greater depth.

Different use case

The first thing I’d like to state is that I don’t think you have to choose one of Elixir or Ruby and completely condemn the other. They are both useful languages, and while there’s a significant overlap in their use cases, I’m convinced there’s a place for both of them.

Talking with people at various conferences and on the Elixir Slack channel or IRC, I see a lot of people leveraging the power of Elixir in new and exciting use cases, far beyond regular web development - something that remains mostly inaccessible to the mainstream Ruby community. And that’s probably the reason more and more people are looking somewhere else, to languages like Elixir or Go. More and more applications are not “just” business web apps: they need to process a lot of data, provide some level of real-time features, and support massive parallelism. Additionally, projects like Nerves open completely new areas to developers - I’m pretty sure we’re going to see more of those in the coming years.

Performance & scalability

Whenever Elixir and Ruby are mentioned in the same sentence, the next one almost certainly includes performance claims and comparisons. That’s true - in many cases, Elixir is vastly faster (by one or even several orders of magnitude) - but I don’t think it’s that important.

It’s an easy thing to include in a blog post, or mention in 140 characters of a tweet. Everybody loves a good benchmark. But performance is why people come to Elixir, not why they stay. It’s just the icing on the cake.

At this point I also need to mention the “Elixir - different kind of promises” article by Hubert Łępicki, where he explains his view on how Elixir and Ruby differ and why it might be a good idea to stay with Elixir.

Fault tolerance and isolation

The biggest advantage of the entire BEAM platform is the isolation and fault tolerance guarantees it provides. It’s the only technology that I know of that truly allows separating different parts of the system, so that they don’t influence one another. This happens on at least two levels:

  • limited error propagation - the supervision system allows for isolating failures and collectively managing unexpected failure scenarios without excessive code. A bug in one part of the system does not affect other parts;
  • progress guarantee - the preemptive scheduling of processes is a true life changer. With it, we never have to worry that we’re going to “block” our thread pool (which is a major concern in other actor system implementations), and we never have to be concerned about starving some part of the system. This is where the soft-realtime guarantees of the platform come from.

To learn more about the fault tolerance and isolation of the Erlang ecosystem, I recommend reading the excellent The Zen of Erlang essay by Fred Hebert.

Developer happiness

Ruby and Rails are praised for “developer happiness”. And that’s true - starting an application with Rails is a breeze; it allows you to deliver features fast and solve problems quickly. It’s extremely enjoyable and satisfying to see a complex system arise from nothing in such a short time. Unfortunately, I feel like the whole regard for developer happiness goes completely out the window when we start considering longer projects. The defaults promoted by the language and framework make it easy to create code that is hard to maintain and makes you want to cry - and those are not tears of happiness. It’s true you can use your own patterns that make this a non-issue, but for the majority of projects, the sad truth is that the defaults are what gets used.

Developer happiness is equally a goal for the Elixir community, but the notion is understood much more broadly - not just in the initial phase of the project, but also in the long term. Various libraries are willing to sacrifice some of the initial “velocity” for long-term maintainability. The changes introduced in Ecto 2.0 (focused primarily on the removal of features known to create maintenance burdens - e.g. callbacks) and the recent changes in code organisation philosophy in Phoenix 1.3 are all a testament to that.

One thing to note about the Phoenix 1.3 changes is that they are not sudden or unexpected - discussions about the structure of a Phoenix application had been happening in blog articles, forum posts, and conference talks for at least a year. The leaders of the community are not afraid of listening to others, admitting to errors when necessary, and correcting them.


Tooling

A common worry for new platforms is the lack of tooling. But Elixir is not a new platform - the tooling for the Erlang ecosystem was built over the last 20 years into an impressive set of debugging and diagnostic facilities. The most profound - tracing - sometimes seems like a true superpower. The language and the platform cooperate nicely, providing basic low-level facilities that make it easy to build extremely useful tools on top of them - Recon, Redbug and observer, just to name a few. And there’s more coming, with projects like wobserver and Erlang Performance Lab raising the bar even higher.

That said, on the front of editor tooling we have a long way to go to match other languages. Hopefully, we’ll get a student to implement the Language Server Protocol for Elixir during the Google Summer of Code, which has the potential to improve the support quickly across various editors (if you’re a student, please consider applying for GSoC!).

It’s all about simplicity

If I were to pick the one thing about Elixir that makes me like it so much, it would be simplicity. In the excellent talk Simple Made Easy, Rich Hickey - the creator of Clojure - defines the difference between those two at first glance similar terms. In short, something is easy when it is short, concise and familiar, but something is simple when it is not complex - when it is easy to understand and decompose.

I feel like many libraries and solutions in the Ruby community focus overly on making things easy - “just include this one line and you can launch rockets to the moon” is a common claim in the READMEs of many gems. The mainstream Ruby community focuses on building solutions to problems and only then tries to make them reusable in some way.

On the other hand, most of the core Elixir libraries - Plug, Phoenix, Ecto, and countless others - are primarily tools for building solutions. They are more low-level, which means you need to write some of the code yourself, but it’s much easier to swap parts out or customise their behaviour. They allow you to spend time solving your problem, instead of trying to make a solution to somebody else’s problem solve yours as well.


With all of that out of the way, there is one thing I love about both Elixir & Ruby - the people. You rarely find such open and welcoming souls as amongst the people attending conferences for either language. The honest love for learning, simple human kindness, and willingness to discuss are otherwise unheard of. Even when having arguments on subjects as emotional as the choice of language, the discussion is respectful and factual. Many thanks to my co-panelists Hubert, Andrzej, Robert, and Maciej for a great discussion during the panel, and to all the other people who spoke to me on the subject during the afterparties. Thank you.


Scaling RabbitMQ on a CoreOS cluster through Docker

<h2>Introduction</h2> <p>RabbitMQ provides, among other features, clustering capabilities. Using clustering, a group of properly configured hosts will behave the same as a single broker instance.</p> <p>All the nodes of a RabbitMQ cluster share the definition of vhosts, users, and exchanges but not queues. By default, queues physically reside on the node where they were created; however, as of version 3.6.1, the queue node ownership can be configured using <a href="https://www.erlang-solutions.com/blog/take-control-of-your-rabbitmq-queues.html">Queue Master Location policies</a>. Queues are globally defined and reachable by establishing a connection to any node of the cluster.</p> <p>Modern architectures often involve container-based ways of scaling, such as Docker. In this post we will see how to create a dynamically scaling RabbitMQ cluster using <a href="https://coreos.com/">CoreOS</a> and <a href="https://www.docker.com/">Docker</a>:</p> <p><img src="https://esl-website-production.s3.amazonaws.com/uploads/image/file/298/coreos_cluster.png" alt="Alternative Text"></p> <p>We will take you on a step-by-step journey from zero to the cluster.</p> <h2>Get ready</h2> <p>We are going to use different technologies, although we will not get into the details of all of them. 
For instance, deep CoreOS/Docker knowledge is not required to execute this test.</p> <p>It can be executed on your PC, and what you need is:</p> <ul> <li><a href="https://www.vagrantup.com/">Vagrant</a></li> <li><a href="https://www.virtualbox.org/">VirtualBox</a></li> <li><a href="https://git-scm.com/">Git</a> <br><br></li> </ul> <p>What we will do:</p> <ol> <li><a href="#configure-coreos-cluster-machines">Configure CoreOS cluster machines</a></li> <li><a href="#configure-docker-swarm">Configure Docker Swarm</a></li> <li><a href="#configure-rabbitmq-docker-cluster">Configure RabbitMQ docker cluster</a></li> </ol> <h2>Configure CoreOS cluster machines</h2> <p>First we have to configure the CoreOS cluster:</p> <p><strong>1.</strong> Clone the vagrant repository:</p> <pre><code class="language-zsh"><span style="color: #555555">$ </span>git clone https://github.com/coreos/coreos-vagrant <span style="color: #555555">$ </span><span style="color: #0086B3">cd </span>coreos-vagrant </code></pre> <p><strong>2.</strong> Use the user-data example file:</p> <pre><code class="language-zsh"><span style="color: #555555">$ </span>cp user-data.sample user-data </code></pre> <p><strong>3.</strong> Configure the cluster parameters:</p> <pre><code class="language-zsh"><span style="color: #555555">$ </span>cp config.rb.sample config.rb </code></pre> <p><strong>4.</strong> Open the file, then uncomment <code>num_instances</code> and change it to 3, or execute:</p> <pre><code class="language-zsh"> sed -i.bk <span style="color: #d14">'s/$num_instances=1/$num_instances=3/'</span> config.rb </code></pre> <p><strong>5.</strong> Start the machines using <code>vagrant up</code>:</p> <pre><code class="language-zsh"><span style="color: #555555">$ </span>vagrant up Bringing machine <span style="color: #d14">'core-01'</span> up with <span style="color: #d14">'virtualbox'</span> provider... 
Bringing machine <span style="color: #d14">'core-02'</span> up with <span style="color: #d14">'virtualbox'</span> provider... Bringing machine <span style="color: #d14">'core-03'</span> up with <span style="color: #d14">'virtualbox'</span> provider… </code></pre> <p><strong>6.</strong> Add the ssh key:</p> <p><code>ssh-add ~/.vagrant.d/insecure_private_key</code></p> <p><strong>7.</strong> Use vagrant <code>ssh core-XX -- -A</code> to login, ex:</p> <pre><code class="language-zsh"><span style="color: #555555">$ </span>vagrant ssh core-01 -- -A <span style="color: #555555">$ </span>vagrant ssh core-02 -- -A <span style="color: #555555">$ </span>vagrant ssh core-03 -- -A </code></pre> <p><strong>8.</strong> Test your CoreOS cluster, login to the machine core-01:</p> <p><code>$ vagrant ssh core-01 -- -A</code></p> <p>Then</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>fleetctl list-machines MACHINE IP METADATA 5f676932... - 995875fc... - e4ae7225... 
- </code></pre> <p><strong>9.</strong> Test the etcd service:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>etcdctl <span style="color: #0086B3">set</span> /my-message <span style="color: #d14">"I love Italy"</span> I love Italy </code></pre> <p><strong>10.</strong> Log in to vagrant ssh core-02:</p> <pre><code class="language-zsh"><span style="color: #555555">$ </span>vagrant ssh core-02 -- -A core@core-02 ~ <span style="color: #008080">$ </span>etcdctl get /my-message I love Italy </code></pre> <p><strong>11.</strong> Log in to vagrant ssh core-03:</p> <pre><code class="language-zsh">vagrant ssh core-03 -- -A core@core-03 ~ <span style="color: #008080">$ </span>etcdctl get /my-message I love Italy </code></pre> <p>As a result you should have:</p> <p><img src="https://esl-website-production.s3.amazonaws.com/uploads/image/file/297/etcd.png" alt="Alternative Text"></p> <p><strong>12.</strong> Test the Docker installation using <code>docker -v</code>:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker -v Docker version 1.12.3, build 34a2ead </code></pre> <p><strong>13.</strong> (Optional step) Run a first image with <code>docker run</code>:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span> docker run ubuntu /bin/echo <span style="color: #d14">'Hello world'</span> … Hello world </code></pre> <p>The CoreOS cluster is ready, and we are able to run Docker inside CoreOS. 
Let’s test our first RabbitMQ docker instance:</p> <p><strong>14.</strong> Execute the official RabbitMQ docker image:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker run -d --hostname my-rabbit --name first_rabbit -p 15672:15672 rabbitmq:3-management </code></pre> <p><strong>15.</strong> Check your <code>eth1</code> vagrant IP (used to access the machine):</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>ifconfig | grep -A1 eth1 eth1: <span style="color: #008080">flags</span><span style="color: #000000;font-weight: bold">=</span>4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet netmask broadcast </code></pre> <p>Go to <code>http://&lt;your_ip&gt;:15672/#/</code> in this case: <a href=""></a>.</p> <p>You should see the RabbitMQ management UI (log in with <code>guest</code> / <code>guest</code>):</p> <p><img src="https://esl-website-production.s3.amazonaws.com/uploads/image/file/296/http_ui.jpg" alt="Alternative Text"></p> <p>In order to scale up the node above, we should run another container with the <code>--link</code> parameter and execute <code>rabbitmqctl join_cluster rabbit@&lt;docker_host_name&gt;</code>. In order to scale down we should stop the second container and execute <code>rabbitmqctl forget_cluster_node rabbit@&lt;docker_host_name&gt;</code>.</p> <p>This is one of the areas where further enhancements on automation would be helpful.</p> <p>We need docker orchestration to configure and manage the docker cluster. 
Among the available orchestration tools, we have chosen <a href="https://www.docker.com/products/docker-swarm">Docker swarm</a>.</p> <p>Before going ahead we should remove all the running containers:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker rm -f <span style="color: #000000;font-weight: bold">$(</span>docker ps -a -q<span style="color: #000000;font-weight: bold">)</span> </code></pre> <p>And the images:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker rmi -f <span style="color: #000000;font-weight: bold">$(</span>docker images -q<span style="color: #000000;font-weight: bold">)</span> </code></pre> <h2>Configure Docker swarm</h2> <p>Docker Swarm is the native clustering mechanism for Docker. We need to initialize one node and join the other nodes:</p> <p><strong>1.</strong> Swarm initialization: on node core-01 execute <code>docker swarm init --advertise-addr</code>.</p> <p><code>docker swarm init</code> automatically generates the command (with the token) to join other nodes to the cluster, as:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker swarm init --advertise-addr Swarm initialized: current node <span style="color: #000000;font-weight: bold">(</span>2fyocfwfwy9o3akuf6a7mg19o<span style="color: #000000;font-weight: bold">)</span> is now a manager. 
To add a worker to this swarm, run the following <span style="color: #0086B3">command</span>: docker swarm join <span style="color: #d14">\</span> --token SWMTKN-1-3xq8o0yc7h74agna72u2dhqv8blaw40zs1oow9io24u229y22z-4bysfgwdijzutfl6ydguqdu1s <span style="color: #d14">\</span> </code></pre> <p>A Docker swarm cluster is composed of a leader node and worker nodes.</p> <p><strong>2.</strong> Join core-02 to the cluster with <code>docker swarm join --token &lt;token&gt; &lt;ip&gt;:&lt;port&gt;</code> (you can copy and paste the command generated in step 1):</p> <p>In this case:</p> <pre><code class="language-zsh">core@core-02 ~ <span style="color: #008080">$ </span>docker swarm join <span style="color: #d14">\</span> --token SWMTKN-1-3xq8o0yc7h74agna72u2dhqv8blaw40zs1oow9io24u229y22z-4bysfgwdijzutfl6ydguqdu1s <span style="color: #d14">\</span> This node joined a swarm as a worker. </code></pre> <p><strong>3.</strong> Join core-03 to the cluster with <code>docker swarm join --token &lt;token&gt; &lt;ip&gt;:&lt;port&gt;</code>:</p> <pre><code class="language-zsh">core@core-03 ~ <span style="color: #008080">$ </span>docker swarm join <span style="color: #d14">\</span> --token SWMTKN-1-3xq8o0yc7h74agna72u2dhqv8blaw40zs1oow9io24u229y22z-4bysfgwdijzutfl6ydguqdu1s <span style="color: #d14">\</span> This node joined a swarm as a worker. 
</code></pre> <p><strong>4.</strong> Check the swarm cluster using <code>docker node ls</code>:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS 07m3d8ipj2kgdiv9jptv9k18a core-02 Ready Active 2fyocfwfwy9o3akuf6a7mg19o <span style="color: #000000;font-weight: bold">*</span> core-01 Ready Active Leader 8cicxxpn5f86u3roembijanig core-03 Ready Active </code></pre> <h2>Configure RabbitMQ docker cluster</h2> <p>There are different ways to create a RabbitMQ cluster:</p> <ul> <li>Manually with <code>rabbitmqctl</code></li> <li>Declaratively by listing cluster nodes in a config file</li> <li>Declaratively with <code>rabbitmq-autocluster</code> (a plugin)</li> <li>Declaratively with <code>rabbitmq-clusterer</code> (a plugin) <br><br></li> </ul> <p>To create the cluster we use the rabbitmq-autocluster plugin since it supports different service discovery backends such as <a href="https://www.consul.io/">Consul</a>, <a href="https://github.com/coreos/etcd">etcd2</a>, DNS, AWS EC2 tags or <a href="https://aws.amazon.com/autoscaling/">AWS Autoscaling Groups</a>.</p> <p>We decided to use etcd2; this is why we tested it in <a href="#configure-coreos-cluster-machines"><strong>Configure CoreOS cluster machines</strong></a>, step 8.</p> <h4>Ready for the final round: create the RabbitMQ cluster.</h4> <p><strong>1.</strong> Create a Docker network:</p> <pre><code class="language-zsh"><span style="color: #555555">core@core-01~$ </span>docker network create --driver overlay rabbitmq-network </code></pre> <p>The swarm makes the overlay network available only to nodes in the swarm that require it for a service.</p> <p><strong>2.</strong> Create a Docker service:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker service create --name rabbitmq-docker-service <span style="color: #d14">\</span> -p 15672:15672 -p 5672:5672 --network rabbitmq-network -e
<span style="color: #008080">AUTOCLUSTER_TYPE</span><span style="color: #000000;font-weight: bold">=</span>etcd <span style="color: #d14">\</span> -e <span style="color: #008080">ETCD_HOST</span><span style="color: #000000;font-weight: bold">=</span><span style="color: #000000;font-weight: bold">${</span><span style="color: #008080">COREOS_PRIVATE_IPV4</span><span style="color: #000000;font-weight: bold">}</span> -e <span style="color: #008080">ETCD_TTL</span><span style="color: #000000;font-weight: bold">=</span>30 -e <span style="color: #008080">RABBITMQ_ERLANG_COOKIE</span><span style="color: #000000;font-weight: bold">=</span><span style="color: #d14">'ilovebeam'</span> <span style="color: #d14">\</span> -e <span style="color: #008080">AUTOCLUSTER_CLEANUP</span><span style="color: #000000;font-weight: bold">=</span><span style="color: #0086B3">true</span> -e <span style="color: #008080">CLEANUP_WARN_ONLY</span><span style="color: #000000;font-weight: bold">=</span><span style="color: #0086B3">false </span>gsantomaggio/rabbitmq-autocluster </code></pre> <p><strong>Note</strong>: The first time, you have to wait a few seconds.</p> <p><strong>3.</strong> Check the service list using <code>docker service ls</code>.</p> <p><strong>4.</strong> You can check the RabbitMQ instance running on <code>http://&lt;your_vagrant_ip&gt;:15672/#/</code>, most likely <a href=""></a></p> <p><strong>5.</strong> Scale your cluster using <code>docker service scale</code>:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker service scale rabbitmq-docker-service<span style="color: #000000;font-weight: bold">=</span>5 rabbitmq-docker-service scaled to 5 </code></pre> <h4>Congratulations!! 
You just scaled your cluster to 5 nodes!</h4> <p>Since the 3 CoreOS machines are in a cluster, you can use any of the 3 machines to access it:</p> <ul> <li><a href=""></a></li> <li><a href=""></a></li> <li><a href=""></a><br> <br><br> Where you should have:</li> </ul> <p><img src="https://esl-website-production.s3.amazonaws.com/uploads/image/file/295/ui_cluster.jpg" alt="Alternative Text"></p> <p><strong>6.</strong> Check the cluster status on the machine:</p> <pre><code class="language-zsh">core@core-01 ~ <span style="color: #008080">$ </span>docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b480a09ea6e2 gsantomaggio/rabbitmq-autocluster:latest <span style="color: #d14">"docker-entrypoint.sh"</span> 1 seconds ago Up Less than a second 4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 25672/tcp rabbitmq-docker-service.3.1vp3o2w1eelzbpjngxncb9wur aabb62882b1b gsantomaggio/rabbitmq-autocluster:latest <span style="color: #d14">"docker-entrypoint.sh"</span> 6 seconds ago Up 5 seconds 4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 25672/tcp rabbitmq-docker-service.1.f2larueov9lk33rwzael6oore </code></pre> <p>The same goes for the other nodes; each has more or less the same number of containers.</p> <p>Let’s now look at the <code>docker service</code> parameters in detail:</p> <style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 10px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;} .tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 10px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;} .tg .tg-9hbo{font-weight:bold;vertical-align:top} .tg .tg-yw4l{vertical-align:top} </style> <table class="tg" style="undefined;table-layout: fixed; width: 806px"> <colgroup> <col style="width: 300px"> <col style="width: 560px"> </colgroup> <tr> <th class="tg-9hbo">Command</th> <th class="tg-9hbo">Description</th> </tr> <tr> <td 
class="tg-yw4l"><code><FONT style="BACKGROUND-COLOR: #F7F7F7">docker service create</font></code></td> <td class="tg-yw4l">Create a docker service</td> </tr> <tr> <td class="tg-yw4l"><code><FONT style="BACKGROUND-COLOR: #F7F7F7">--name rabbitmq-docker-service</font></code></td> <td class="tg-yw4l">Set the service name; you can check the service list using <code><FONT style="BACKGROUND-COLOR: #F7F7F7">docker service ls</font></code></td> </tr> <tr> <td class="tg-yw4l"><code><FONT style="BACKGROUND-COLOR: #F7F7F7">-p 15672:15672 -p 5672:5672</font></code></td> <td class="tg-yw4l">Map the RabbitMQ standard ports: 5672 is the AMQP port and 15672 is the Management UI port</td> </tr> <tr> <td class="tg-yw4l"><code><FONT style="BACKGROUND-COLOR: #F7F7F7">--network rabbitmq-network</font></code></td> <td class="tg-yw4l">Choose the docker network</td> </tr> <tr> <td class="tg-yw4l"><code><FONT style="BACKGROUND-COLOR: #F7F7F7">-e RABBITMQ_ERLANG_COOKIE='ilovebeam'</font></code></td> <td class="tg-yw4l">Set the same erlang.cookie value on all the containers, needed by RabbitMQ to create a cluster. 
With different erlang.cookie values it is not possible to create a cluster.</td> </tr> </table><br><br> <p>Next are the auto-cluster parameters:</p> <style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 10px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;} .tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 10px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;} .tg .tg-9hbo{font-weight:bold;vertical-align:top} .tg .tg-yw4l{vertical-align:top} </style> <table class="tg" style="undefined;table-layout: fixed; width: 860px"> <colgroup> <col style="width: 300px"> <col style="width: 560px"> </colgroup> <tr> <th class="tg-9hbo">Command</th> <th class="tg-9hbo">Description</th> </tr> <tr> <td class="tg-yw4l"><code>-e AUTOCLUSTER_TYPE=etcd</code></td> <td class="tg-yw4l">Set the service discovery backend to etcd</td> </tr> <tr> <td class="tg-yw4l"><code>-e ETCD_HOST=${COREOS_PRIVATE_IPV4}</code></td> <td class="tg-yw4l">The containers need to know the etcd2 IP. 
After executing the service you can query the database using the etcdctl command line, e.g. <code>etcdctl ls /rabbitmq -recursive</code>, or using the HTTP API, e.g. <code>curl -L</code></td> </tr> <tr> <td class="tg-yw4l"><code>-e ETCD_TTL=30</code></td> <td class="tg-yw4l">Used to specify how long a node can be down before it is removed from etcd's list of RabbitMQ nodes in the cluster</td> </tr> <tr> <td class="tg-yw4l"><code>-e AUTOCLUSTER_CLEANUP=true</code></td> <td class="tg-yw4l"><li>Enables a periodic check that removes any nodes that are not alive in the cluster and no longer listed in the service discovery list.</li><br><li>Scaling down removes one or more containers; the nodes will be removed from the <code>etcd</code> database - see, for example: <code>docker service scale rabbitmq-docker-service=4</code></li></td> </tr> <tr> <td class="tg-yw4l"><code>-e CLEANUP_WARN_ONLY=false</code></td> <td class="tg-yw4l">If set, the plugin will only warn about nodes that it would clean up. <code>AUTOCLUSTER_CLEANUP</code> requires <code>CLEANUP_WARN_ONLY=false</code> to work.</td> </tr> <tr> <td class="tg-yw4l">gsantomaggio/rabbitmq-autocluster</td> <td class="tg-yw4l">The official docker image does not support the auto-cluster plugin; in my personal opinion, it should. I created a docker image and registered it on docker-hub.</td> </tr> </table><br><br> <p>Setting <code>AUTOCLUSTER_CLEANUP</code> to true removes the node automatically; if <code>AUTOCLUSTER_CLEANUP</code> is false, you need to remove the node manually.</p> <p><strong>Scaling down with <code>AUTOCLUSTER_CLEANUP</code> can be very dangerous</strong>: if there are no <a href="https://www.rabbitmq.com/ha.html">HA policies</a>, all the queues and messages stored on the node will be lost. 
To enable an HA policy you can use the command line or the HTTP API; in this case the easier way is the HTTP API:</p> <pre><code class="language-zsh">curl -u guest:guest -H <span style="color: #d14">"Content-Type: application/json"</span> -X PUT <span style="color: #d14">\</span> -d <span style="color: #d14">'{"pattern":"","definition":{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}}'</span> <span style="color: #d14">\</span> </code></pre> <p><strong>Note</strong>: Enabling mirrored queues across all the nodes could impact performance, especially when the number of nodes is undefined. Using <code>&quot;ha-mode&quot;:&quot;exactly&quot;,&quot;ha-params&quot;:3</code> we enable the mirror on exactly 3 nodes. Scaling down should therefore be done one node at a time, so that RabbitMQ can move the mirrors to other nodes.</p> <h2>Conclusions</h2> <p>RabbitMQ scales easily inside Docker: each RabbitMQ node has its own files and does not need to share anything through the file system. It fits perfectly with containers.</p> <p>This architecture implements important features such as:</p> <ul> <li>Round-robin connections</li> <li>Failover of cluster machines/images</li> <li>Portability</li> <li>Scaling in terms of CoreOS nodes and RabbitMQ nodes <br><br></li> </ul> <p>Scaling RabbitMQ on Docker and CoreOS is easy and powerful. We are testing and implementing the same environment using different orchestration and service discovery tools such as Kubernetes, Consul, etc.; that said, <strong>we consider this architecture experimental</strong>.</p> <p>Here you can see the final result:</p> <p><a href="https://www.youtube.com/embed/i9KH_FWJ2ek" target="_blank"><img src="http://img.youtube.com/vi/i9KH_FWJ2ek/0.jpg" alt="RabbitMQ on CoreOS" width="240" height="180" border="10" /></a></p> <p>Enjoy!</p> <p>At Erlang Solutions we can help you design, implement, operate and optimise a system utilising RabbitMQ. 
We provide tier 3 (most advanced) level RabbitMQ support for Pivotal’s customers and we work closely with Pivotal support tiers 1 and 2. We also offer RabbitMQ customisation if your system goes beyond the typical requirements, and bespoke support for such implementations.</p> <p><a href="https://www.erlang-solutions.com/products/rabbitmq.html">Learn more about RabbitMQ</a>, the only fast and dependable open-source message server you’ll ever need. <br></p>
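
The HA policy request above can also be scripted. This Python sketch builds the same JSON definition the curl example sends; the policy name `ha-three` and the `localhost:15672` endpoint are assumptions for illustration (the post elides the exact URL it targets), while the definition itself and the `PUT /api/policies/<vhost>/<name>` route come from the RabbitMQ management HTTP API:

```python
import json

# Same definition as the curl example: mirror each matching queue to
# exactly 3 nodes and synchronise new mirrors automatically.
definition = {
    "pattern": "",
    "definition": {
        "ha-mode": "exactly",
        "ha-params": 3,
        "ha-sync-mode": "automatic",
    },
}
body = json.dumps(definition)

# The management HTTP API sets policies with PUT /api/policies/<vhost>/<name>;
# "%2f" is the URL-encoded default vhost "/". Host and policy name are assumed.
url = "http://localhost:15672/api/policies/%2f/ha-three"

print(url)
print(body)
```

Sending `body` to `url` with an HTTP PUT and `guest`/`guest` basic auth is equivalent to the curl command shown in the post.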


Reporting a Security Issue in Erlang/OTP



Please follow this document to report security issues in Erlang/OTP. Please do not create a public issue for a security issue.

When should you report a security issue?

The risk level is often determined by the product of the impact once exploited and the probability of exploitation occurring. In other words, if a bug can cause great damage, but it takes the highest privilege to exploit it, then the bug is not a high-risk one. Similarly, if the bug is easily exploitable, but its impact is limited, then it is not a high-risk issue either.
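
As a toy sketch of that heuristic (the numeric 1-5 scale and the example scores are invented for illustration; this is not an official Erlang/OTP scoring model), risk can be modelled as the product of impact and exploitability:

```python
def risk_score(impact: int, exploitability: int) -> int:
    """Toy heuristic: risk as the product of impact (1-5) and exploitability (1-5)."""
    return impact * exploitability

# Great damage, but exploitation requires the highest privilege: not high risk.
privileged_bug = risk_score(impact=5, exploitability=1)

# Easily exploitable, but the impact is limited: not high risk either.
limited_bug = risk_score(impact=1, exploitability=5)

# Both factors high: clearly worth reporting.
serious_bug = risk_score(impact=5, exploitability=4)

print(privileged_bug, limited_bug, serious_bug)  # -> 5 5 20
```

Both lopsided cases score the same low value, matching the rule of thumb above that neither extreme alone makes an issue high risk.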


There is no hard and fast rule to determine if a bug is worth reporting as a security issue to erlang-security [at] erlang [dot] org. A general rule is: if an attacker with no access to the Erlang application or its system can affect its confidentiality, integrity, or availability, it is worth reporting.


What happens after the report?

All security bugs in the Erlang/OTP distribution should be reported to erlang-security [at] erlang [dot] org. Your report will be handled by a small security team within the OTP team. Your email will be acknowledged as soon as we start handling the issue.


Please use a descriptive email title for your report. After the initial response to your report, the security team will keep you updated on the progress and the decisions being made towards a fix and release announcement.


Flagging Existing Issues as Security-related

If you believe that an existing public issue on bugs.erlang.org is security-related, we ask that you send an email to erlang-security [at] erlang [dot] org. The email title should contain the issue ID on bugs.erlang.org (e.g. Flagging security issue ERL-001). Please include a short description to motivate why it should be handled according to the security policy.


Erlang OS X Installer: Official Release

<h1>An update to the Erlang Solutions OTP Installer</h1> <p>With <a href="https://www.erlang-solutions.com/resources/download.html">Erlang/OTP 19.3</a> coming out today, we’ve decided to release our new <a href="https://packages.erlang-solutions.com/os-x-installer/ErlangInstaller1.0.0.dmg">Erlang Solutions OS X Installer</a>.</p> <p>From now on, the <a href="https://www.erlang-solutions.com/blog/erlang-installer-a-better-way-to-use-erlang-on-osx.html">old installer</a> should be considered deprecated. During the Erlang/OTP 20 release cycle, it will stop being provided for new versions. The new installer is now our recommended option.</p> <p>Fear not, though! The new installer has a whole slew of options that are likely to please any serious coder. I’ve summarised them for you in this post.</p> <h2>Motivation for changes</h2> <p>The previous iteration of the Installer supports auto-updating when a new version comes out.</p> <p><img src="https://s3.amazonaws.com/uploads.hipchat.com/15025/4746287/5ccMF6ISxlG6Bwk/content_old-installer-prompt.png" alt="Alternative Text" title="Old prompt"></p> <p>This can be a very useful feature for the people who like to stay on the cutting edge, but a lot of serious developers like to stick to an older version they know is supported by the software they are using.</p> <p>Moreover, looking around our own company, we have noticed people forgo downloading our own Installer because it does not offer an indispensable feature: the ability to quickly switch between different versions.</p> <p>To many devs, it is important to be able to switch to an old Erlang version in order to test a patch on a legacy system, before quickly popping back to 19.0. 
Some people keep 4-5 different versions of Erlang on their machine!</p> <p>This is why we’ve decided to change the ESL Installer to allow this kind of feature, while incorporating changes that will make the app feel more modern.</p> <h2>Installation</h2> <p>Installation of the new ESL Installer is as easy as downloading and then drag&amp;dropping it into your Applications folder.</p> <p><img src="https://s3.amazonaws.com/uploads.hipchat.com/15025/4746287/giiosp4WgGGkIZe/content_tray-preview.png" alt="Alternative Text" title="Tray preview"></p> <p>The app will show up in the tray, and its preferences can be accessed through the OS X System Preferences.</p> <h2>Usage</h2> <p>When clicked, the tray icon expands a menu that allows you to download releases, start a shell with one of the releases you already downloaded, force a check for new releases or updates, and quickly access the preferences.</p> <p><img src="https://s3.amazonaws.com/uploads.hipchat.com/15025/4746287/PthmnVgCwmbziJ4/content_unrolled-tray.png" alt="Alternative Text" title="Unrolled tray"></p> <h3>Downloading releases</h3> <p>The Releases tab of the installer allows you to install or uninstall various versions of Erlang.</p> <p><img src="https://s3.amazonaws.com/uploads.hipchat.com/15025/4746287/u7AVmkgcUougphZ/content_download-tab.png" alt="Alternative Text" title="New download tab"></p> <h3>Settings</h3> <p>The General tab of the Installer allows you to set up your preferences, including the default terminal and release you’d like to use, automatic checks for updates and automatically starting the Installer on system boot, all to ensure you stay up to date with new things in the world of Erlang.</p> <p><img src="https://s3.amazonaws.com/uploads.hipchat.com/15025/4746287/xtuOkcAELIUcxSw/content_general-tab.png" alt="Alternative Text" title="New general tab"></p> <h2>Other perks</h2> <p>The new Installer app should now look more similar to the other OS X apps you know and love, and instead of an ugly, 
annoying popup you should now be getting much less obstructive notifications.</p> <h2>Download link</h2> <p>To try out the new Installer, go to the <a href="https://packages.erlang-solutions.com/os-x-installer/ErlangInstaller1.0.0.dmg">direct link</a> on <a href="https://www.erlang-solutions.com/resources/download.html">our webpage</a>.</p> <h2>In conclusion</h2> <p>We at Erlang Solutions and Inaka Networks hope you have an excellent experience with the new Installer! Please report any feedback <a href="mailto:packages@erlang-solutions.com">here</a>, or on <a href="https://twitter.com/ErlangSolutions">Twitter</a>.</p>


Mocks and Explicit Contracts: In Practice w/ Elixir

Writing tests for your code is easy. Writing good tests is much harder. Now throw in requests to external APIs that can return a myriad of different responses (or not return at all!), and we’ve just added a whole new layer of cases to cover. On the web it’s easy to overlook this complexity: making an HTTP request has become so second nature that the line of code initiating it reads as casually as any other line in your application. However, that casualness hides a lot of complexity.

We recently released the first version of our self-service debugging tool. You can see it live at https://debug.spreedly.com. The goal we had in mind for the support application was to more clearly display customer transaction data for debugging failed transactions. We decided to build a separate web application to layer on top of the Spreedly API which could deal with the authentication mechanics as well as transaction querying to keep separate concerns between our core transactional API and querying and displaying data. I should also mention that the support application is our first public facing Elixir application in production!

Since this was a separate service, it meant that we needed to make HTTP requests from the support application to our API in order to pull and display data. Although we had solid unit tests with example responses we’d expect from our API, we wanted to also incorporate remote tests which actually hit the production API so we could occasionally do a real world full stack test. Remote tests aren’t meant to be executed every test run, only when we want that extra ounce of confidence.

As we started looking into testing Elixir applications against external dependencies, we came across José Valim’s excellent blog post, Mocks and Explicit Contracts. If you haven’t read it already, you should check it out. It has a lot of great thoughts and will give this post a little more context. It seemed like a solid approach for building less brittle tests, so we thought we’d implement it ourselves to see how well it works in practice, and whether it could let us run remote tests alongside our unit tests. Here’s how our experience with this approach went…

Pluggable clients

The first thing we needed to do was update the Spreedly API client in the support application to be dynamically selected instead of hardcoded. In production we want to build real HTTP requests, but for unit tests we want to replace that module with a mock module which just returns simulated responses.

# Module wrapping our API requests for transactions
defmodule SupportApp.Transaction do
  @core Application.get_env(:support_app, :core)

  def fetch(token, user, secret) do
    @core.transaction(user, token, secret)
  end
end

In line 3 above, you can see that the @core module attribute is set dynamically by doing an Application configuration lookup. In our particular case, the lookup returns the module to use for getting Spreedly transaction data.

Once we’ve got that set, we can configure the module to use in our application config. Notice that the module returned by the lookup changes depending on the environment we’re currently running in. We really like the explicitness here!

# In config/test.exs
config :support_app, :core, SupportApp.Core.Mock

# In config/config.exs
config :support_app, :core, SupportApp.Core.Api
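One subtlety worth noting (our aside, not from José’s post): because @core is a module attribute, the Application.get_env/2 call runs at compile time, so the configured module is baked in when SupportApp.Transaction is compiled. A minimal sketch of a runtime alternative, assuming the same :support_app / :core config keys, is to read the config inside a private function:

```elixir
# Sketch: look up the configured module at call time instead of
# compile time, so it can even be swapped with Application.put_env/3.
defmodule SupportApp.Transaction do
  def fetch(token, user, secret) do
    core().transaction(user, token, secret)
  end

  # Runtime lookup of the module configured under :support_app, :core
  defp core, do: Application.get_env(:support_app, :core)
end
```

Compile-time lookup is fine for our use case, since the environment never changes while the app is running, but the runtime variant is handy if you ever want to swap implementations in a single test run.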

Enforceable interfaces

So, what do those two modules look like anyway? Well, from our Transaction module above we know that both the HTTP client and the mock will need to have a transaction/3 function which will take care of getting us a transaction whether it be from the Spreedly API or a simulated one we build ourselves.

So in production, we’re wrapping HTTPotion to make requests.

defmodule SupportApp.Core.Api do
  @behaviour SupportApp.Core

  def transaction(key, token, secret) do
    path = "..."  # endpoint URL elided
    options = []  # request options (auth, headers, ...) elided

    %HTTPotion.Response{body: body} = HTTPotion.get(path, options)

    case Poison.Parser.parse!(body) do
      parsed -> parsed  # response-handling clauses elided
    end
  end
end

However, in unit tests we’re going to use a mock module instead which just returns a straight Map of what we’d expect from a production request.

defmodule SupportApp.Core.Mock do
  @behaviour SupportApp.Core

  def transaction(_key, "nonexistent", _), do: nil
  def transaction(_key, _token, _) do
    %{
      "transaction" => %{
        "succeeded" => true,
        "state" => "succeeded",
        "token" => "7icIbTtxupZpY8SKxwlUAKq8Qiw",
        "message_key" => "messages.transaction_succeeded",
        "message" => "Succeeded!"
      }
    }
  end
end

Two things to notice about the modules above:

  1. For the mock module, we pattern matched on the function parameters to alter the response we want. This is incredibly useful when you want to test against different response scenarios since you can define another transaction/3 function with a specific parameter value and then have it return the response with appropriate test data. In our case, we wanted to also test when someone doesn’t enter a transaction token to search (see line 4 in the mock module above).

  2. In both the production and test modules, there’s a line beneath the module definition: @behaviour SupportApp.Core. By having a top-level behaviour for these modules, we can be sure that our production API client module and our mock module adhere to the same interface. If we want to add another endpoint, we start by adding it to the top-level behaviour SupportApp.Core, which then trickles down to all implementing modules and keeps our API contract explicitly defined, so the production API client and mock client remain in step.
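To illustrate the first point, adding another response scenario is just another function head. Here’s a hypothetical sketch of our own (the "declined" token and the state string are invented for illustration, not real Spreedly data), wrapped in its own module so it stands alone:

```elixir
# Hypothetical extra scenario: a declined transaction, selected by
# pattern matching on a magic token value in the second argument.
defmodule SupportApp.Core.MockSketch do
  def transaction(_key, "declined", _secret) do
    %{"transaction" => %{"succeeded" => false, "state" => "failed"}}
  end

  def transaction(_key, _token, _secret) do
    %{"transaction" => %{"succeeded" => true, "state" => "succeeded"}}
  end
end
```

Each scenario gets its own clause, and the test simply passes the matching token to pick the response it wants.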

Here’s a snippet of our behaviour module that ensures all compliant modules have a transaction function with the correct arity and argument types:

defmodule SupportApp.Core do
  use Behaviour

  @callback transaction(key :: String.t, token :: String.t, secret :: String.t) :: %{}
end

And that’s it for setup!
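As a bonus, the compiler helps enforce the contract: if a module declares the behaviour but forgets a callback, compilation emits a warning. A self-contained sketch (with a trimmed-down behaviour and hypothetical module names, so it compiles on its own):

```elixir
defmodule SketchCore do
  @callback transaction(String.t, String.t, String.t) :: map
end

defmodule IncompleteClient do
  @behaviour SketchCore
  # transaction/3 is missing, so compiling this module prints a
  # warning that the callback required by SketchCore is not implemented
end
```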

Test segmentation

Up to this point we’ve followed the approach as outlined in José’s blog post and created an explicit contract between our modules, allowing us to swap underlying implementations depending on the environment we’re running in: a mock module during test and the Spreedly API client in production. However, our original plan was to also include remote tests that actually hit our production API, so how can we enable those in our tests?

Simple! In the same way we dynamically looked up the module, we can use the test setup block to hand each test the module that stands in for our external dependency. So for the unit tests we have:

defmodule SupportApp.Core.MockTest do
  use ExUnit.Case, async: true

  setup do
    {:ok, core: Application.get_env(:support_app, :core)} # <-- See config/test.exs
  end

  test "transaction", %{core: core} do
    %{"transaction" => txn} = core.transaction(...)
    assert txn["token"]
  end
end

But for our remote tests, we change our setup block to drop in the real HTTP client wrapper. I should also note that we needed to create production API credentials and make them available to our remote tests, but handling that is a bit out of scope for this post.

defmodule SupportApp.Core.ApiTest do
  use ExUnit.Case, async: true
  @moduletag :remote

  setup do
    {:ok, core: SupportApp.Core.Api} # <-- Force the module to be used
  end

  test "transaction", %{core: core} do
    %{"transaction" => txn} = core.transaction(...)
    assert txn["token"]
  end
end

Almost there! Since we don’t want to run our remote tests on every test run (they’re only around for that extra bit of confidence!), we can use a @moduletag to exclude them by default and only run them when we explicitly say so. We added a line to our test_helper.exs:

ExUnit.configure exclude: [:remote]

Now to run the remote tests, just add an include flag:

$ mix test --include remote
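Conversely, ExUnit’s tag filters also let you run nothing but the remote suite, which is handy when you only want the full-stack check:

```shell
# Run only the tests tagged :remote, skipping everything else
$ mix test --only remote
```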

We’re really happy with the setup so far and think it gives us both the confidence of real-world full-stack tests and the flexibility of simulating responses within a mock.


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.