
= OTP Cheatsheet

link:https://stratus3d.com/otp-cheatsheet/[image:/images/posts/otp_cheatsheet/otp-cheatsheet-screenshot-resized.png[OTP cheatsheet webpage screenshot]]

Last year I link:/blog/2018/01/20/erlang-cheatsheet/[created an Erlang cheatsheet]. My goal was to keep it fairly simple and only include the things that I often forgot. I decided to limit it to a single page so I could print it out and pin it on the wall for reference. I’ve been doing a lot of Elixir recently, but the cheatsheet has still been useful to me and my coworkers. There are a lot of things I often have to look up when working with supervisors and other OTP behaviors, so this month I decided it would be nice to have a separate cheatsheet for OTP. This new cheatsheet follows in the same vein as my Erlang cheatsheet. It is intended for those experienced with OTP, not beginners.

The cheatsheet is available for you to freely print or download at https://stratus3d.com/otp-cheatsheet/. The source code has been released under the MIT license and is available in the same repository as the Erlang cheatsheet on link:https://github.com/Stratus3D/erlang-cheatsheet[GitHub at Stratus3D/erlang-cheatsheet].

I’d really like to hear your thoughts on this new OTP cheatsheet. Is there something it is missing? Is there something that can be simplified or removed?

Let me know via link:/contact[email] or Twitter. If you want to contribute directly, feel free to create an issue or pull request on the link:https://github.com/Stratus3D/erlang-cheatsheet[GitHub repository].


Erlang OTP 22.0 is released


Erlang/OTP 22 is a new major release with new features and improvements as well as incompatibilities.

For a deeper dive into the highlights of the OTP 22 release, you can read our blog here:

http://blog.erlang.org/OTP-22-Highlights/

Potential Incompatibilities

  • gen_* behaviours: If logging of the last N messages through sys:log/2,3 is active for the server, this log is included in the terminate report.
  • reltool: A new element, Opts, can now be included in a rel tuple in the reltool release specific configuration format: {rel, Name, Vsn, RelApps, Opts}.
  • All external pids/ports/refs created by erlang:list_to_pid and similar functions now compare equal to any other pid/port/ref with same number from that node.
  • The old legacy erl_interface library is deprecated as of OTP 22, and will be removed in OTP 23. This does not apply to the ei library.
  • VxWorks is deprecated as of OTP 22 and will be removed in OTP 23.

New Features

Erts:

  • Support for Erlang Distribution protocol to split the payload of large messages into several fragments.
  • ETS option write_concurrency now also affects and improves the scalability of ordered_set tables.
  • The length/1 BIF used to calculate the length of the list in one go without yielding, even if the list was very long. Now it yields when called with long lists.
  • A new (still experimental) module socket is introduced. It is implemented as a NIF and the idea is that it shall be as "close as possible" to the OS level socket interface.
  • Added the NIF function enif_term_type, which helps avoid long sequences of enif_is_xyz by returning the type of the given term. This is especially helpful for NIFs that serialize terms, such as JSON encoders, where it can improve both performance and readability.

Compiler:

  • The compiler has been rewritten to internally use an intermediate representation based on Static Single Assignment (SSA). The new intermediate representation makes more optimizations possible.
    • The binary matching optimizations are now applicable in many more circumstances than before.
    • Type optimizations are now applied across local function calls, and will remove a lot more redundant type tests than before.
  • All compiler options that can be given in the source file can now be given in the option list on the command line for erlc.
  • In OTP 22, HiPE (the native code compiler) is not fully functional. The reasons for this are new BEAM instructions for binary matching that the HiPE native code compiler does not support. If erlc is invoked with the +native option, and if any of the new binary matching instructions are used, the compiler will issue a warning and produce a BEAM file without native code.

Standard libraries:

  • Cover now uses the counters module instead of ets for updating counters. The new function cover:local_only/0 allows running Cover in a restricted but faster local-only mode. The increase in speed will vary depending on the type of code being cover-compiled, as an example the compiler test suite runs more than twice as fast with the new Cover.
  • A simple socket API is provided through the socket module. This is a low level API that does *not* replace gen_[tcp|udp|sctp]. It is intended to *eventually* replace the inet driver. It also provides a basic API that facilitates the implementation of other protocols than TCP, UDP and SCTP. Known issues are: no support for the Windows OS (currently) and a small term leakage. This feature will be classed as experimental in OTP 22.
  • SSL: now uses the new logger API, including log levels and verbose debug logging.
  • SSL: Basic support for TLS 1.3 Server for experimental use.
  • crypto: The new hash_info/1 and cipher_info/1 functions return maps with information about the hash or cipher in the argument.

 

For more details see
http://erlang.org/download/otp_src_22.0.readme

Pre built versions for Windows can be fetched here:
http://erlang.org/download/otp_win32_22.0.exe
http://erlang.org/download/otp_win64_22.0.exe

Online documentation can be browsed here:
http://erlang.org/doc/search/

The Erlang/OTP source can also be found at GitHub on the official Erlang repository:

https://github.com/erlang/otp

OTP-22.0

 

Thank you for all your contributions!


Catch Up

While the three years since the last blog post might have flown by, and it may not seem like much is going on in LFE land, there has been :-) There have been on-going discussions in GitHub tickets as well as some chatter on the mailing list, but there has also been some quiet hacking going on in the project itself.

  • 246 commits were made to the main LFE development branch (177 in the remainder of 2016, 57 in 2017, 11 in 2018, and 1 in 2019)
  • Robert's been giving lots of thought to an LFE based upon Erlang's AST instead of Core Erlang (see the erlang-ast branch)
  • Two core library modules have been added to LFE (and written in LFE, not Erlang!): Common Lisp and Clojure compatibility libraries
  • The GitHub project has gotten a refresher with new tags (including status) and updates, and new questions answered
  • There are also several new pull requests that have been opened with about 40 new commits for 2019
  • The LFE Dockerfiles have been updated to be based upon the official Erlang Docker images
  • The LFE blog repo has received a significant update that makes it easier for new posts to be created

Lots of spicy goodness, and maybe even some stuff that will end up as some more new blog posts for 2019 ;-)

Stay tuned …


OTP 22 Highlights

OTP 22 has just been released. It has been a long process with three release candidates before the final release. We decided this year to try to get one month more testing of the major release and I think that the extra time has paid off. We’ve received many bug reports from the community about large and small bugs that our internal tests did not find.

This blog post will describe some highlights of what is released in OTP 22 and in OTP 21 maintenance patches.

You can download the readme describing the changes here: OTP 22 Readme. Or, as always, look at the release notes of the application you are interested in. For instance here: OTP 22 Erts Release Notes.

Compiler

In OTP 22 we have completely re-implemented the lower levels of the Erlang compiler. Before this change the Erlang compiler consisted of a number of IRs (intermediate representations):

Erlang AST -> Core Erlang -> Kernel Erlang -> Beam Asm

When compiling an Erlang module, the code is optimized and transformed between these different IRs. In OTP 22 we have added a new IR called Beam SSA, and most of the optimizations previously done on Kernel Erlang are now done on Beam SSA. There is a series of blog posts describing this change in greater detail for those that are interested.

With this change the compile pipeline now looks like this:

Erlang AST -> Core Erlang -> Kernel Erlang -> Beam SSA -> Beam Asm

Together with the SSA rewrite a number of new optimizations have been introduced. One such optimization is the strengthening of the bit syntax matching optimizations. Before the change, you had to be very careful with how you wrote your binary matching in order for the binary match context optimization to work properly. There were also scenarios where it was impossible to get the optimization to trigger at all. One place in Erlang/OTP where this had a great effect was the internal string:bin_search_inv_1 function used by string:lexemes/1 and other string functions. We can see the change in the benchmark graph below (where higher is better; the turquoise line is the OTP 22 branch):

String Lexemes OTP 22 benchmark

You can read more about this optimization in PR1958 and Retiring old performance pitfalls.
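As a small illustration of the kind of code that benefits, the loop below (a made-up example, not from the OTP sources) walks a binary byte by byte; with the SSA-based optimizations the compiler can keep reusing a single match context in many more situations like this instead of creating a sub-binary on every iteration:

```erlang
%% Hypothetical example: count the bytes in a binary with a tail-recursive
%% match loop. The <<_, Rest/binary>> pattern allows the compiler to reuse
%% the match context across iterations.
count_bytes(Bin) when is_binary(Bin) ->
    count_bytes(Bin, 0).

count_bytes(<<_, Rest/binary>>, N) ->
    count_bytes(Rest, N + 1);
count_bytes(<<>>, N) ->
    N.
```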

Another great optimization is PR2100 which makes the compiler’s type optimization pass work across functions within the same module. For instance in the code below:

-record(myrecord, {value}).

h(#myrecord{value=Val}) ->
    #myrecord{value=Val+1}.

i(A) ->
    #myrecord{value=V} = h(#myrecord{value=A}),
    V.

The new compiler is able to detect the type of the term passed as an argument to h/1 and also the return value of h/1 so it can eliminate the record checks completely. Looking at the BEAM code (produced by erlc -S) of the h/1 function we get:

OTP 21:

    {test,is_tagged_tuple,{f,9},[{x,0},2,{atom,myrecord}]}.
    {get_tuple_element,{x,0},0,{x,1}}.
    {get_tuple_element,{x,0},1,{x,2}}.
    {gc_bif,'+',{f,0},3,[{x,2},{integer,1}],{x,0}}.
    {test_heap,3,1}.

OTP 22:

    {get_tuple_element,{x,0},1,{x,0}}.
    {gc_bif,'+',{f,0},1,[{x,0},{integer,1}],{x,0}}.
    {test_heap,3,1}.

The is_tagged_tuple instruction has been completely eliminated and as an added bonus one get_tuple_element was also removed.

However, this is only the start and we are already looking into making even better optimizations for OTP 23, building on top of the SSA rewrite.

Socket

OTP 22 comes with a new experimental socket API. The idea behind this API is to have a stable intermediary API that users can use to create features that are not part of the higher-level gen APIs. We will also be using this API to re-implement the higher-level gen APIs in OTP 23.
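To get a feel for the API, here is a minimal accept-and-echo sketch (the return value shapes shown are those of the OTP 22 version of this experimental module and may change in later releases; error handling is omitted):

```erlang
%% Sketch: accept one TCP connection on port 8000 and echo back the first
%% message, using the experimental socket module from OTP 22.
{ok, LSock} = socket:open(inet, stream, tcp),
{ok, _Port} = socket:bind(LSock, #{family => inet, addr => any, port => 8000}),
ok = socket:listen(LSock),
{ok, ASock} = socket:accept(LSock),
{ok, Data} = socket:recv(ASock),
ok = socket:send(ASock, Data).
```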

Another aspect of the new socket API is that it can be used to greatly reduce the overhead that is inherent with using ports. I wrote this microbenchmark called gen_tcp2 to see what the difference could be.

Erlang/OTP 22 [erts-10.4] [source] [64-bit]

Eshell V10.4  (abort with ^G)
1> gen_tcp2:run().
              client             server
 gen_tcp:       12.4 ns/byte       12.4 ns/byte
gen_tcp2:        7.3 ns/byte        7.3 ns/byte
   ratio:       58.9 %             58.9%
ok

The results seem promising. The socket implementation of gen_tcp uses roughly 40% less CPU to send the same number of packets. Of course, gen_tcp does a lot more than gen_tcp2 (dealing with lots of buffers, error cases and IPv6 to name a few), so it is not by any means a fair comparison. Though if an application can live without all the guarantees that come with gen_tcp, then using socket could be very good for performance.

Write concurrency in ordered_sets

PR1952, contributed by Kjell Winblad from Uppsala University, makes it possible to do updates in parallel on ets tables of the type ordered_set. This has greatly increased the scalability of such ets tables, which are the base for many applications, for instance pg2 and the default ssl session cache.

Ordered Set Write Concurrency OTP 22 benchmark

In the benchmark above we can see that enabling write_concurrency on an ordered_set table increases the possible operations per second on a 64 core machine almost five-fold. How much your application gains from this will depend on the ratio of read and write operations on the ordered_set. You can see the results of many more benchmarks here.
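Enabling it is just a table option. A small sketch (table and key names are made up) where a thousand processes insert into the same ordered_set in parallel:

```erlang
%% Sketch: an ordered_set shared by many concurrent writers. With
%% {write_concurrency, true} (honored for ordered_set since OTP 22),
%% inserts to different parts of the key range can proceed in parallel.
Tab = ets:new(shared, [ordered_set, public, {write_concurrency, true}]),
Parent = self(),
[spawn(fun() ->
           ets:insert(Tab, {N, N * N}),
           Parent ! done
       end) || N <- lists:seq(1, 1000)],
%% Wait for every writer before inspecting the table.
[receive done -> ok end || _ <- lists:seq(1, 1000)],
1000 = ets:info(Tab, size).
```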

The data structure used to enable write_concurrency in the ordered_set is called contention adaptive search tree. In a nutshell, the data structure keeps a shadow tree that represents the locks needed to read or write a term in the tree. When conflicts between multiple writers happen, the shadow tree is updated to have more fine-grained locks for specific branches of the tree. You can read more about the details of the algorithm in A Contention Adapting Approach to Concurrent Ordered Sets.

The original PR had a few places where it still had to fall back to run sequentially, but that has been fixed in PR1997 and then further optimizations have been done in PR2190.

TLS Improvements

In OTP 21.3 the culmination of many optimizations in the ssl application was released. For certain use-cases, the overhead of using TLS has been significantly reduced. For instance in this TLS distribution benchmark:

TLS Dist OTP 22 benchmark

The bytes per second that the Erlang distribution over TLS is able to send has been increased from 17K to about 80K, so more than 4 times as much data as before. The throughput gain above is mostly due to better batching of distribution messages, which means that ssl does not have to add a lot of padding to each message sent. So it does not translate over to using ssl directly, but is still a very nice performance improvement.

In OTP 22 the logging facility for ssl has been greatly improved and there is now basic server support for TLSv1.3. In order to work with TLSv1.3 you need to install an OpenSSL version that supports TLSv1.3 (for instance 1.1.1b), compile Erlang/OTP using that OpenSSL version and generate the correct certificates. Then we can start a TLSv1.3 server like this:

LOpts = [{certfile, "tls_server_cert.pem"},
	     {keyfile, "tls_server_key.pem"},
	     {versions, ['tlsv1.3']},
	     {log_level, debug}
	    ],
{ok, LSock} = ssl:listen(8443, LOpts),
{ok, CSock} = ssl:transport_accept(LSock),
{ok, S} = ssl:handshake(CSock).

And use the OpenSSL client to connect:

openssl s_client -debug -connect localhost:8443 \
  -CAfile tls_client_cacerts.pem \
  -tls1_3 -groups P-256:X25519

This will produce a huge amount of logs, but somewhere in there we can see this in Erlang:

<<< TLS 1.3 Handshake, ClientHello

and this in OpenSSL:

New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384

which means that we have successfully created a new TLSv1.3 connection. If you want to duplicate what I’ve done you can follow these instructions.

Not all features of TLSv1.3 have been implemented; you can see which parts of the RFCs are missing in the ssl application’s Standard Compliance documentation.

Fragmented distribution messages

In order to deal with the head of line blocking caused by sending very large messages over Erlang Distribution, we have added fragmentation of distribution messages in OTP 22. This means that large messages will now be split up into smaller fragments allowing smaller messages to be sent without being blocked for a long time.

If we run the code below, which does a small rpc call every 100 milliseconds while concurrently sending many 1/2 GB terms:

1> spawn(fun() ->
           (fun F(Max) ->
             {T, _} = timer:tc(fun() ->
                 rpc:call(RemoteNode, erlang, length, [[]])
               end),
             NewMax = lists:max([Max, T]),
             [io:format("Max: ~p~n",[NewMax]) || NewMax > Max],
             timer:sleep(100),
             F(NewMax)
           end)(0)
         end).
2> D = lists:duplicate(100000000,100000000),
   [{kjell, RemoteNode} ! D || _ <- lists:seq(1,100)],
   ok.

Using two of our test machines I get a max latency of about 0.4 seconds on OTP 22, whereas on OTP 21 the max latency is around 50 seconds. So with the network at our test site the max latency is decreased by roughly 99%, which is a nice improvement.

Counter/Atomics and persistent_terms

Three new modules, counters, atomics, and persistent_term, were added in OTP 21.2. These modules make it possible for the user to access low-level primitives of the runtime to make some spectacular performance improvements.

For instance, the cover tool was recently re-written to use counters and persistent_term. Previously it used a bunch of ets tables to keep the counters for when the code was executed, but now it uses counters and the overhead of running cover has decreased by up to 80%.

persistent_term adds run-time support for what mochiglobal and similar tools do. It makes it possible to access data globally very efficiently, but at the cost of making updates very expensive. In Erlang/OTP we so far use it to optimize logger backends, but the use cases are numerous.
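A sketch of the pattern described above (the my_hits key is made up): create a counter once, stash its reference in persistent_term, and let any process bump it through the cheap global read:

```erlang
%% Sketch: a process-shared counter whose reference is published globally.
%% persistent_term:get/1 is very cheap; the put/2 should happen only once.
CRef = counters:new(1, [write_concurrency]),
persistent_term:put(my_hits, CRef),
ok = counters:add(persistent_term:get(my_hits), 1, 1),
ok = counters:add(persistent_term:get(my_hits), 1, 1),
2 = counters:get(persistent_term:get(my_hits), 1).
```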

A fun (and possibly useful) use case for atomics is to create a shared mutable bit-vector. So, now we can spawn 100 processes and play flip that bit with each other:

BV = bit_vector:new(80),
[spawn(fun F() ->
            bit_vector:flip(BV, rand:uniform(80)-1),
            F()
          end) || _ <- lists:seq(1,100)],
timer:sleep(1000),
bit_vector:print(BV).
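There is no bit_vector module in OTP; the snippet above assumes one. A rough sketch of how it could be built on atomics is below (print/1 is omitted, and since atomics has no atomic XOR, flipping is done with a compare-and-exchange retry loop):

```erlang
-module(bit_vector).
-export([new/1, get/2, flip/2]).

%% Sketch: a fixed-size mutable bit vector backed by an array of
%% unsigned 64-bit atomics (atomics indices are 1-based).
new(Size) ->
    Words = (Size + 63) div 64,
    {atomics:new(Words, [{signed, false}]), Size}.

get({Ref, _Size}, Ix) ->
    Word = atomics:get(Ref, Ix div 64 + 1),
    (Word bsr (Ix rem 64)) band 1.

flip({Ref, _Size}, Ix) ->
    flip_word(Ref, Ix div 64 + 1, 1 bsl (Ix rem 64)).

%% Retry until no other process changed the word between read and swap.
flip_word(Ref, Wix, Mask) ->
    Old = atomics:get(Ref, Wix),
    case atomics:compare_exchange(Ref, Wix, Old, Old bxor Mask) of
        ok -> ok;
        _Actual -> flip_word(Ref, Wix, Mask)
    end.
```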

Documentation Changes

In OTP 21.3, the version when all functions and modules were introduced was added to the documentation.

Documentation Version OTP 21.3

Sverker used some git magic to figure out when functions and modules were added and automatically updated all the reference manuals. So now it should be a lot easier to see when some functionality was introduced. Knowing when an option to a function was added is still problematic, but we are trying to be better there as well.

In OTP 22 a new documentation top section called Internal Documentation has been added to the erts and compiler applications. The sections contain the internal documentation that was previously only available on GitHub, making it easier to access.

More Memory optimizations

Each major OTP release wouldn’t be complete without a set of memory allocator improvements and OTP 22 is no exception. The ones with the most potential to impact your applications are PR2046 and PR1854. Both of these optimizations should allow systems to better utilize memory carriers in high memory situations allowing your systems to handle more load.


My Take on Property-Based Testing

…for Erlang & Elixir

A few months ago, Fred gave me a copy of his latest book (Property-Based Testing with PropEr, Erlang, and Elixir) so I could review it. So, here I am, returning the favor. But I’ll also use this chance to express some of my feelings and opinions about Property-Based Testing in general, since reading the book elicited quite a few of them. This will not be one of the usual articles on this blog, but I hope you’ll enjoy it anyway.

The Book

In a nutshell, this book is a very extensive and detailed manual/hands-on-tutorial with which you’ll first learn the general concepts behind Property-Based Testing (e.g. properties, generators, shrinking, etc.), then you’ll learn the basics of the methodology and tools and finally some of the more advanced techniques like custom generators, stateful properties and more.

Fred does a great job of walking you step by step and as you read the book, each chapter builds on the previous ones. But each chapter is just a tiny step that you can tackle easily. Suddenly, you reach the end of the book and you realize you learned a lot.

You should be aware of one thing, tho: It’s based on PropEr. You’ll find both Erlang and Elixir code samples, but all of them will use this framework. Same thing for the exercises. So, if you’re planning to practice what you read (which is something the book encourages you to do), you’ll be using it. Luckily, it’s an open-source framework so you can do that for free! 💪 That’s not to say that you can’t extrapolate what you learn to other frameworks, but it won’t be as easy.

My Personal Experience

I learned about Property-Based Testing at my University, almost 15 years ago. When I learned about it, the library that we used was QuickCheck (the Haskell version). At that time, I loved it. It seemed almost magical to me but I didn’t actually use it besides some examples and class assignments.

5 years later, I developed my thesis project also in Haskell. I had just learned TDD from Hernan Wilkinson and I thought what could be better than using Property-Based Testing for TDD? An amazing tool, combined with an incredibly effective technique…

My thesis project is still on GitHub, you can check its code and try to run its tests. I believe the last time I tried to run them it was still 2010… and they would probably be running still… 🤦‍♂

Fast-forward another 5 years and I was working with the Inakos. We’ve got ourselves a pretty fancy QuickCheck license and started trying to add stateful property tests to one of our biggest projects (which included an HTTP API built in Erlang with Cowboy). The result was a good set of properties that were a bit hard to read and understand and actually found… 0 bugs. 🤷‍♂

You can tell I learned a few lessons about PBT over the years…

My Opinions

First of all, I want to set one thing straight: I believe that the reasoning behind Property-Based Testing is sound. As Fred puts it…

[With Property-Based Testing] you’ll be able to write simple, short, and concise tests that automatically comb through your code the way only the most obsessive tester could.

Writing good properties and letting a good framework check them against your code will undoubtedly test your code much much better than any unit test you (or the most obsessive tester) can ever write. I don’t think anybody can deny that.

But…

What stuff should be tested this way?

Is it worth it to test all your code with properties? I don’t think so. That’s the basis of my thesis project problem: I tried to use QuickCheck to test everything (including the GUI). That was not a wise choice, for various reasons:

  • Property-Based tests are slower than unit tests. This is by-design since each property is tested a multitude of times instead of using just one example.
  • Writing good properties is not easy. Fred states that multiple times in his book. Sometimes, a good unit test is not hard to write even if it covers just one possible scenario, but writing a property that captures that same thing in such a general way that you can have 10000 different instances with which to test the system is much more complex.
  • Writing enough properties is not easy. In the same way that you can hardly ever be sure that you wrote enough unit tests to really cover all the functionality in the module/component/system you’re testing, it’s hard to be sure that you wrote enough properties for it. Fred does a great job of showing this situation in his book when he describes the different types of properties you can write, particularly when you’re not verifying your system behavior against a model. But you should also be wary of adding too many properties since each property takes time to run and you don’t want your suites to run forever…

So, what is worth testing with properties? I believe Fred summarized this idea quite well in this podcast when he said:

It comes down to figuring out what you want to write a test for first […] With Property-Based Tests this is really really hard because you need to find a general rule, but if you have something so stupid simple that the rule is the test itself you can not just do that.
[…]
You want to find something that is not trivial, but that you understand well and has significant complexity in its implementation. Usually data structures are interesting for that […].

From my perspective, there are things that are totally worth checking with properties and those are, in general, the ones used by Fred as examples in his book…

Data Structures

When you’re writing a module to manage a new complex data structure, like a new model for lists, a particular kind of tree, a hash table, etc., writing unit tests will almost certainly miss a bunch of corner cases that property-based testing will not. Writing tests as properties will also help you define your module better and come up with a nicer API for it. Besides, as the book shows with list tests from OTP, property-based tests will use far less code, too.

Optimizations / Refactoring

One of the best scenarios you can find to use properties for testing is when you’re refactoring something that you can be sure that works correctly. Let’s say, you’re trying to optimize a certain function for performance or a module for readability, etc… Then you can use the previous version as a model and verify that your new implementation works exactly as the old one did. Writing properties will be at least as easy as writing unit tests (I think it will be easier since you don’t need to come up with the examples yourself) and it will give you much more confidence.
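Sketched with PropEr (the module and function names here are made up; my_sort/1 stands in for the new, optimized implementation, with the trusted old one as the model):

```erlang
%% Hypothetical model-based property: the refactored implementation must
%% behave exactly like the old, trusted one for any generated input.
-module(refactor_props).
-include_lib("proper/include/proper.hrl").
-export([prop_same_as_old/0]).

prop_same_as_old() ->
    ?FORALL(L, list(integer()),
            my_sort(L) =:= lists:sort(L)).

%% Placeholder for the optimized rewrite under test.
my_sort(L) -> lists:sort(L).
```

Running proper:quickcheck(refactor_props:prop_same_as_old()) then checks hundreds of generated lists against the model, no hand-picked examples needed.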

Libraries for Generic Processing

Much like data structures above, you can be writing a library like the CSV parser Fred presents in his book. Something where you don’t really know all of your use cases beforehand. Maybe you have a standard or RFC to guide you (That’s great, since encoding RFCs as properties is easier than coming up with general properties yourself) or some other properly written specification. Another thing that makes the use of properties easier is having complementary functionality (like encoding/decoding) so that you can write symmetric properties.
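With a complementary pair, the property almost writes itself. A PropEr sketch (encode/1 and decode/1 are placeholders for any real codec, such as the book's CSV parser):

```erlang
%% Hypothetical round-trip property for a symmetric encode/decode pair.
-module(codec_props).
-include_lib("proper/include/proper.hrl").
-export([prop_roundtrip/0]).

prop_roundtrip() ->
    ?FORALL(Rows, list(list(integer())),
            decode(encode(Rows)) =:= Rows).

%% Placeholder codec; a real one would produce CSV text instead.
encode(Rows) -> term_to_binary(Rows).
decode(Bin) -> binary_to_term(Bin).
```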

Complex Algorithms

This is, at least in my experience, the most common place to reap the benefits of writing properties: In most of your systems, at some point, you’ll have to write an algorithmic piece that sits at the core of your system’s logic. You likely won’t get this for free from a library and it might involve multiple data structures and/or some complex pieces. In his book, Fred presents the checkout code kata as an example of this. When your algorithm has to respond correctly to some parameters that may vary widely with quite a few edge cases, writing properties instead of unit tests definitely pays off. I would personally still write a unit test for the shrunk value produced each time the properties found a bug in my system, but that’s just me.

Complex Stateful Systems

Then you have Stateful Properties, which is a great way to test your system as a whole when it has multiple API endpoints that can be executed in different sequences and conflict with each other. So, if you’re writing a system where, as stated in the book…

“what the code should do” — what the user perceives — is simple, but “how the code does it” — how it is implemented — is complex.

…then you’ll get a lot of benefits from writing Stateful Properties. But if your system is big but not complex (like your usual CRUD HTTP server) or if “what the code should do” is really hard to model, maybe not so much.

What about your Systems?

Now the question is: How often do you write systems that are worth testing with properties?

How often do you create new data structures? While I work with opaque data structures almost every day at work, they’re generally flat (i.e. a map or a record with multiple fields and their accessors). I don’t regularly have to create a new type of trees or hash tables.

How often do you work on large enough refactoring/optimization tasks? Large/complex enough to merit adding properties to compare the new version with the old one thoroughly? Some people (I remember Hernán, for instance) may do that all the time… Me? Not so much.

How often do you write new generic libraries? I actually used to do that a lot at Inaka and I believe many of those libraries would certainly benefit from property-based testing, indeed. On the other hand, in scenarios like the one described by the book, where you find yourself writing a system that needs to parse a CSV file… well, I don’t think I will ever face that requirement and go “OK. I have to write a generic CSV parser now.”. I will either try to find a library for that or write just the code to do exactly what my system needs (i.e. parse a 3-column/0-peculiarities CSV file in the case of the book).

How often do you write complex algorithms or whole stateful systems? This actually happens more often than any of the stuff in the previous paragraphs, it happens to me at least once per system, maybe more. Of course, this only includes relatively small pieces of the systems I build, but I do believe that using property-based testing for them would be a nice addition.

Final Notes

In conclusion, as I see it: Property-Based Testing is a great tool that you should seriously consider using for those places where its benefits over traditional example-based tests outweigh its drawbacks (basically time consumed writing and running the property tests). But, as with any other tools, it shouldn’t be the only tool at your disposal and therefore I would not advocate for Properties-Driven Development.

If/when you decide to start using Property-Based Testing, you should totally read Fred’s book. It will guide you through the process and it will make it smooth and enjoyable. It won’t make you an expert in writing properties, but it will get you as far as a book can take you.

Extra Credits

Finally, besides the main topic of the book and this article, I want to mention some other things related to the book:

  • I still hate Erlang macros. If I ever start using PropEr for Erlang, I’ll certainly be the weirdo that writes proper:forall instead of ?FORALL. PropEr macros for Elixir are much nicer, though.
  • Targeted Properties and Simulated Annealing are cool stuff to learn about, even if you won’t use Property-Based Testing that much. Don’t skim over that chapter in the book.
  • When working with opaque data structures, the book simplifies a bunch of concepts (for instance, it generates objects of a type by building maps, which is wrong since PropEr should not know how that type is represented inside the module). Don’t learn all your lessons about opaque data structures from the book. It’s not a book about that :)
  • Hidden in plain sight within the book you can find one of the best pieces of advice on how to structure your programs that I’ve ever read. There are several paragraphs and pictures about it but in a nutshell…
…side effects can be grouped together at one end of the system, and we can keep the rest of the code as pure as possible.

A reminder: Erlang Battleground is still looking for writers. If you want to join us, just get in touch with me (Brujo Benavides) and I’ll add you to our publication.

Spawnfest 2019 is coming! This year it will be on September 21st/22nd so start getting your team ready and brainstorming for ideas! Registration will open in a few months. And, of course, we’re always looking for sponsors!

In other news, ElixirConf is coming to South America! We’ll meet on October 24th & 25th in Medellín, Colombia with an amazing lineup of speakers, including Robert Virding, Verónica, Andrea Leopardi, Francesco Cesarini, Carlos Andres Bolaños and myself!! 😎

Find more info at https://www.elixirconf.la/ and start registering!


My Take on Property-Based Testing was originally published in Erlang Battleground on Medium, where people are continuing the conversation by highlighting and responding to this story.


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.