Erlang/OTP 28.0-rc2 is the second release candidate of three before the OTP 28.0 release.
The intention with this release is to get feedback from our users. All feedback is welcome, even if it is only to say that it works for you. We encourage users to try it out and give us feedback either by creating an issue at https://github.com/erlang/otp/issues or by posting to Erlang Forums.
All artifacts for the release can be downloaded from the Erlang/OTP Github release and you can view the new documentation at https://erlang.org/documentation/doc-16-rc2/doc. You can also install the latest release using kerl like this:
kerl build 28.0-rc2 28.0-rc2
Starting with this release, a source Software Bill of Materials (SBOM) will describe the release on the Github Releases page. We welcome feedback on the SBOM.
Erlang/OTP 28 is a new major release with new features, improvements as well as a few incompatibilities. Some of the new features are highlighted below.
Many thanks to all contributors!
Comprehensions have been extended with “zip generators”, allowing multiple generators to be run in parallel. For example, [A+B || A <- [1,2,3] && B <- [4,5,6]] will produce [5,7,9].
Generators in comprehensions can now be strict, meaning that if the generator pattern does not match, an exception will be raised instead of silently ignoring the value that did not match.
It is now possible to use any base for floating point numbers as per EEP 75: Based Floating Point Literals.
For certain types of errors, the compiler can now suggest corrections. For example, when attempting to use variable A that is not defined but A0 is, the compiler could emit the following message: variable 'A' is unbound, did you mean 'A0'?
The size of an atom in the Erlang source code was limited to 255 bytes in previous releases, meaning that an atom containing only emojis could contain only 63 emojis. While atoms are still only allowed to contain 255 characters, the number of bytes is no longer limited.
The warn_deprecated_catch option enables warnings for use of old-style catch expressions of the form catch Expr instead of the modern try … catch … end.
Provided that the map argument for a maps:put/3 call is known to the compiler to be a map, the compiler will replace such calls with the corresponding update using the map syntax.
Some BIFs with side effects (such as binary_to_atom/1) are optimized in try … catch in the same way as guard BIFs in order to gain performance.
The compiler’s alias analysis pass is now both faster and less conservative, allowing optimizations of records and binary construction to be applied in more cases.
The trace:system/3 function has been added. It has a similar interface to erlang:system_monitor/2, but it also supports trace sessions.
os:set_signal/2 now supports setting handlers for the SIGWINCH, SIGCONT, and SIGINFO signals.
The two new BIFs erlang:processes_iterator/0 and erlang:process_next/1 make it possible to iterate over the process table in a way that scales better than erlang:processes/0.
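From Elixir, the new iteration API could be used roughly as in the sketch below; it assumes :erlang.process_next/1 returns a {pid, iterator} tuple and :none once the table is exhausted.

```elixir
defmodule ProcWalk do
  # Count processes by walking the table one entry at a time,
  # assuming process_next/1 yields {pid, next_iterator} | :none.
  def count, do: count(:erlang.processes_iterator(), 0)

  defp count(iter, acc) do
    case :erlang.process_next(iter) do
      :none -> acc
      {_pid, next_iter} -> count(next_iter, acc + 1)
    end
  end
end

ProcWalk.count()
```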
The erl -noshell mode has been updated to have two sub-modes called raw and cooked, where cooked is the old default behaviour and raw can be used to bypass the line-editing support of the native terminal. Using raw mode it is possible to read keystrokes as they occur without the user having to press Enter. Also, the raw mode does not echo the typed characters to stdout.
The shell now prints a help message explaining how to interrupt a running command when it has been executing for longer than 5 seconds.
The join(Binaries, Separator) function that joins a list of binaries has been added to the binary module.
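From Elixir, the new function is reachable through Erlang interop; a quick sketch:

```elixir
iex> :binary.join(["foo", "bar", "baz"], ", ")
"foo, bar, baz"
```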
By default, sets created by the sets module will now be represented as maps.
Module re has been updated to use the newer PCRE2 library instead of the PCRE library.
There is a new zstd module that does Zstandard compression.
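Called from Elixir, a round-trip might look like the sketch below; the one-shot compress/1 and decompress/1 functions are an assumption about the new module's API.

```elixir
# Sketch only: assumes :zstd exposes one-shot compress/1 and decompress/1.
data = :binary.copy("hello world ", 1_000)
compressed = :zstd.compress(data)
# Normalize to a binary in case the result comes back as iodata.
^data = IO.iodata_to_binary(:zstd.decompress(compressed))
```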
The indent-region command in Emacs will now handle multiline strings better.
For more details about new features and potential incompatibilities, see the README.
The erlang.org webpage and download page will be temporarily offline from Friday, March 14th until Monday, March 16th.
Code and downloads will still be available from www.github.com/erlang
This is a re-publishing of a blog post I originally wrote for work, but wanted on my own blog as well.
AI is everywhere, and its impressive claims are leading to rapid adoption. At this stage, I’d qualify it as charismatic technology—something that under-delivers on what it promises, but promises so much that the industry still leverages it because we believe it will eventually deliver on these claims.
This is a known pattern. In this post, I’ll use the example of automation deployments to go over known patterns and risks in order to provide you with a list of questions to ask about potential AI solutions.
I’ll first cover a short list of base assumptions, and then borrow from scholars of cognitive systems engineering and resilience engineering to list said criteria. At the core of it is the idea that when we say we want humans in the loop, it really matters where in the loop they are.
The first thing I’m going to say is that we currently do not have Artificial General Intelligence (AGI). I don’t care whether we have it in 2 years or 40 years or never; if I’m looking to deploy a tool (or an agent) that is supposed to do stuff to my production environments, it has to be able to do it now. I am not looking to be impressed, I am looking to make my life and the system better.
Another mechanism I want you to keep in mind is something called the context gap. In a nutshell, any model or automation is constructed from a narrow definition of a controlled environment, which can expand as it gains autonomy, but remains limited. By comparison, people in a system start from a broad situation and narrow definitions down and add constraints to make problem-solving tractable. One side starts from a narrow context, and one starts from a wide one—so in practice, with humans and machines, you end up seeing a type of teamwork where one constantly updates the other:
The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is.
— Ackoff (1979, p. 97)
Because of that mindset, I will disregard all arguments of “it’s coming soon” and “it’s getting better real fast” and instead frame what current LLM solutions are shaped like: tools and automation. As it turns out, there are lots of studies about ergonomics, tool design, collaborative design, where semi-autonomous components fit into sociotechnical systems, and how they tend to fail.
Additionally, I’ll borrow from the framing used by people who study joint cognitive systems: rather than looking only at the abilities of what a single person or tool can do, we’re going to look at the overall performance of the joint system.
This is important because if you have a tool that is built to be operated like an autonomous agent, you can get weird results in your integration. You’re essentially building an interface for the wrong kind of component—like using a joystick to ride a bicycle.
This lens will assist us in establishing general criteria about where the problems will likely be without having to test for every single one and evaluate them on benchmarks against each other.
The following list of questions is meant to act as reminders—abstracting away all the theory from research papers you’d need to read—to let you think through some of the important stuff your teams should track, whether they are engineers using code generation, SREs using AIOps, or managers and execs making the call to adopt new tooling.
An interesting warning comes from studying how LLMs function as learning aides. The researchers found that people who trained using LLMs tended to fail tests more when the LLMs were taken away compared to people who never studied with them, except if the prompts were specifically (and successfully) designed to help people learn.
Likewise, it’s been known for decades that when automation handles standard challenges, the operators expected to take over when the automation reaches its limits end up worse off and generally require more training to keep the overall system performant.
While people can feel like they’re getting better and more productive with tool assistance, it doesn’t necessarily follow that they are learning or improving. Over time, there’s a serious risk that your overall system’s performance will be limited to what the automation can do—because without proper design, people keeping the automation in check will gradually lose the skills they had developed prior.
Traditionally successful tools tend to work on the principle that they improve the physical or mental abilities of their operator: search tools let you go through more data than you could on your own and shift demands to external memory, a bicycle more effectively transmits force for locomotion, a blind spot alert on your car can extend your ability to pay attention to your surroundings, and so on.
Automation that augments users therefore tends to be easier to direct, and sort of extends the person’s abilities, rather than acting based on preset goals and framing. Automation that augments a machine tends to broaden the device’s scope and control by leveraging some known effects of their environment and successfully hiding them away. For software folks, an autoscaling controller is a good example of the latter.
Neither is fundamentally better nor worse than the other—but you should figure out what kind of automation you’re getting, because they fail differently. Augmenting the user implies that they can tackle a broader variety of challenges effectively. Augmenting the computers tends to mean that when the component reaches its limits, the challenges are worse for the operator.
If your job is to watch the tool go and then say whether it was doing a good or bad job (and maybe take over if it does a bad job), you’re going to have problems. It has long been known that people adapt to their tools, and automation can create complacency. Self-driving cars that generally drive themselves well but still require a monitor are not effectively monitored.
Instead, having AI that supports people or adds perspectives to the work an operator is already doing tends to yield better long-term results than patterns where the human learns to mostly delegate and focus elsewhere.
(As a side note, this is why I tend to dislike incident summarizers. Don’t make it so people stop trying to piece together what happened! Instead, I prefer seeing tools that look at your summaries to remind you of items you may have forgotten, or that look for linguistic cues that point to biases or reductive points of view.)
When evaluating a tool, you should ask questions about where the automation lands:
This is a bit of a hybrid between “Does it extend you?” and “Is it turning you into a monitor?” The five questions above let you figure that out.
As the tool becomes a source of assertions or constraints (rather than a source of information and options), the operator becomes someone who interacts with the world from inside the tool rather than someone who interacts with the world with the tool’s help. The tool stops being a tool and becomes a representation of the whole system, which means whatever limitations and internal constraints it has are then transmitted to your users.
People tend to do multiple tasks over many contexts. Some automated systems are built with alarms or alerts that require stealing someone’s focus, and unless they truly are the most critical thing their users could give attention to, they are going to be an annoyance that can lower the effectiveness of the overall system.
Tools tend to embody a given perspective. For example, AIOps tools that are built to find a root cause will likely carry the conceptual framework behind root causes in their design. More subtly, these perspectives are sometimes hidden in the type of data you get: if your AIOps agent can only see alerts, your telemetry data, and maybe your code, it will rarely be a source of suggestions on how to improve your workflows because that isn’t part of its world.
In roles that are inherently about pulling context from many disconnected sources, how on earth is automation going to make the right decisions? And moreover, who’s accountable for when it makes a poor decision on incomplete data? Surely not the buyer who installed it!
This is also one of the many ways in which automation can reinforce biases—not just based on what is in its training data, but also based on its own structure and what inputs were considered most important at design time. The tool can itself become a keyhole through which your conclusions are guided.
A common trope in incident response is heroes—the few people who know everything inside and out, and who end up being necessary bottlenecks to all emergencies. They can’t go away for vacation, they’re too busy to train others, they develop blind spots that nobody can fix, and they can’t be replaced. To avoid this, you have to maintain a continuous awareness of who knows what, and crosstrain each other to always have enough redundancy.
If you have a team of multiple engineers and you add AI to it, having it do all of the tasks of a specific kind means it becomes a de facto hero to your team. If that’s okay, be aware that any outages or dysfunction in the AI agent would likely have no practical workaround. You will essentially have offshored part of your ops.
What a thing promises to be is never what it is—otherwise AWS would be enough, and Kubernetes would be enough, and JIRA would be enough, and the software would work fine with no one needing to fix things.
That just doesn’t happen. Ever. Even if it’s really, really good, it’s gonna have outages and surprises, and it’ll mess up here and there, no matter what it is. We aren’t building an omnipotent computer god, we’re building imperfect software.
You’ll want to seriously consider whether the tradeoffs you’d make in terms of quality and cost are worth it, and this is going to be a case-by-case basis. Just be careful not to fix the problem by adding a human in the loop that acts as a monitor!
We don’t notice major parts of our own jobs because they feel natural. A classic pattern here is one of AIs getting better at diagnosing patients, except the benchmarks are usually run on a patient chart where most of the relevant observations have already been made by someone else. Similarly, we often see AI pass a test with flying colors while it still can’t be productive at the job the test represents.
People in general have adopted a model of cognition based on information processing that’s very similar to how computers work (get data in, think, output stuff, rinse and repeat), but for decades, there have been multiple disciplines that looked harder at situated work and cognition, moving past that model. Key patterns of cognition are not just in the mind, but are also embedded in the environment and in the interactions we have with each other.
Be wary of acquiring a solution that solves what you think the problem is rather than what it actually is. We routinely show we don’t accurately know the latter.
You probably know how straightforward it can be to write a toy project on your own, with full control of every refactor. You probably also know how this stops being true as your team grows.
As it stands today, a lot of AI agents are built within a snapshot of the current world: one or few AI tools added to teams that are mostly made up of people. By analogy, this would be like everyone selling you a computer assuming it were the first and only electronic device inside your household.
Problems arise when you go beyond these assumptions: maybe AI that writes code has to go through a code review process, but what if that code review is done by another unrelated AI agent? What happens when you get to operations and common mode failures impact components from various teams that all have agents empowered to go fix things to the best of their ability with the available data? Are they going to clash with people, or even with each other?
Humans also have that ability and tend to solve it via processes and procedures, explicit coordination, announcing what they’ll do before they do it, and calling upon each other when they need help. Will multiple agents require something equivalent, and if so, do you have it in place?
Some changes that cause issues might be safe to roll back, some not (maybe they include database migrations, maybe it is better to be down than corrupting data), and some may contain changes that rolling back wouldn’t fix (maybe the workload is controlled by one or more feature flags).
Knowing what to do in these situations can sometimes be understood from code or release notes, but some situations can require different workflows involving broader parts of the organization. A risk of automation without context is that if you have situations where waiting or doing little is the best option, then you’ll need to either have automation that requires input to act, or a set of actions to quickly disable multiple types of automation as fast as possible.
Many of these may exist at the same time, and it becomes the operators’ jobs to not only maintain their own context, but also maintain a mental model of the context each of these pieces of automation has access to.
The fancier your agents, the fancier your operators’ understanding and abilities must be to properly orchestrate them. The more surprising your landscape is, the harder it can become to manage with semi-autonomous elements roaming around.
One way to track accountability in a system is to figure out who ends up having to learn lessons and change how things are done. It’s not always the same people or teams, and generally, learning will happen whether you want it or not.
This is more of a rhetorical question right now, because I expect that in most cases, when things go wrong, whoever is expected to monitor the AI tool is going to have to steer it in a better direction and fix it (if they can); if it can’t be fixed, then the expectation will be that the automation, as a tool, will be used more judiciously in the future.
In a nutshell, if the expectation is that your engineers are going to be doing the learning and tweaking, your AI isn’t an independent agent—it’s a tool that cosplays as an independent agent.
All in all, none of the above questions flat out say you should not use AI, nor where exactly in the loop you should put people. The key point is that you should ask that question and be aware that just adding whatever to your system is not going to substitute workers away. It will, instead, transform work and create new patterns and weaknesses.
Some of these patterns are known and well-studied. We don’t have to go rushing to rediscover them all through failures as if we were the first to ever automate something. If AI ever gets so good and so smart that it’s better than all your engineers, it won’t make a difference whether you adopt it only once it’s good. In the meanwhile, these things do matter and have real impacts, so please design your systems responsibly.
If you’re interested to know more about the theoretical elements underpinning this post, the following references—on top of whatever was already linked in the text—might be of interest:
Erlang/OTP 27.3 is the third maintenance patch package for OTP 27, with mostly bug fixes as well as improvements.
For details about bug fixes and potential incompatibilities, see the Erlang 27.3 README.
The Erlang/OTP source can also be found at GitHub on the official Erlang repository, https://github.com/erlang/otp
Download links for this and previous versions are found here:
We are pleased to share that the Elixir project now complies with OpenChain (ISO/IEC 5230), an international standard for open source license compliance. This step aligns with broader efforts to meet industry standards for supply chain and cybersecurity best practices.
“Today’s announcement around Elixir’s conformance represents another significant example of community maturity,” says Shane Coughlan, OpenChain General Manager. “With projects - the final upstream - using ISO standards for compliance and security with increasing frequency, we are seeing a shift to longer-term improvements to trust in the supply chain.”
By following OpenChain (ISO/IEC 5230), we demonstrate clear processes around license compliance. This benefits commercial and community users alike, making Elixir easier to adopt and integrate with confidence.
Elixir has an automated release process where its artifacts are signed. This change strengthens this process by:
These additions offer greater transparency into the components and licenses of each release, supporting more rigorous supply chain requirements.
Contributing to Elixir remains largely the same; we have added more clarity and guidelines around it:
Contributors will notice minimal procedural changes, as standard practices around licensing remain in place.
For more details, see the CONTRIBUTING guidelines.
These updates were made in collaboration with the Erlang Ecosystem Foundation, reflecting a shared commitment to robust compliance and secure development practices. Thank you to everyone who supported this milestone. We appreciate the community’s ongoing contributions and look forward to continuing the growth of Elixir under these established guidelines.
Welcome to our series of case studies about companies using Elixir in production.
Remote is the everywhere employment platform enabling companies to find, hire, manage, and pay people anywhere across the world.
Founded in 2019, they reached unicorn status in just over two years and have continued their rapid growth trajectory since.
Since day zero, Elixir has been their primary technology. Currently, their engineering organization as a whole consists of nearly 300 individuals.
This case study focuses on their experience using Elixir in a high-growth environment.
Marcelo Lebre, co-founder and president of Remote, had worked with many languages and frameworks throughout his career, often encountering the same trade-off: easy-to-code versus easy-to-scale.
In 2015, while searching for alternatives, he discovered Elixir. Intrigued, Marcelo decided to give it a try and immediately saw its potential. At the time, Elixir was still in its early days, but he noticed how fast the community was growing, with new packages and frameworks appearing at a rapid pace.
In December 2018, when Marcelo and his co-founder decided to start the company, they had to make a decision about the technology that would support their vision. Marcelo wanted to prioritize building a great product quickly without worrying about scalability issues from the start. He found Elixir to be the perfect match:
I wanted to focus on building a great product fast and not really worry about its scalability. Elixir was the perfect match—reliable performance, easy-to-read syntax, strong community, and a learning curve that made it accessible to new hires.
- Marcelo Lebre, Co-founder and President
The biggest trade-off Marcelo identified was the smaller pool of Elixir developers compared to languages like Ruby or Python. However, he quickly realized that the quality of candidates more than made up for it:
The signal-to-noise ratio in the quality of Elixir candidates was much higher, which made the trade-off worthwhile.
- Marcelo Lebre, Co-founder and President
Remote operates primarily with a monolith, with Elixir in the backend and React in the front-end.
The monolith enabled speed and simplicity, allowing the team to iterate quickly and focus on building features. However, as the company grew, they needed to invest in tools and practices to manage almost 180 engineers working in the same codebase.
One practice was boundary enforcement. They used the Boundary library to maintain strict boundaries between modules and domains inside the codebase.
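As an illustration (not Remote's actual code), a Boundary declaration typically looks like the sketch below, with hypothetical module names:

```elixir
defmodule MyApp.Billing do
  use Boundary,
    # Other boundaries this one is allowed to depend on.
    deps: [MyApp.Accounts, MyApp.Repo],
    # Modules callable from outside this boundary.
    exports: [Invoice]

  # ... billing domain code ...
end
```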
Another key investment was optimizing their compilation time in the CI pipeline. Since their project has around 15,000 files, compiling it in every build would take too long. So, they implemented incremental builds in their CI pipeline, recompiling only the files affected by changes instead of the entire codebase.
I feel confident making significant changes in the codebase. The combination of using a functional language and our robust test suite allows us to keep moving forward without too much worry.
- André Albuquerque, Staff Engineer
Additionally, as their codebase grew, the Elixir language continued to evolve, introducing better abstractions for developers working with large codebases. For example, with the release of Elixir v1.11, the introduction of config/runtime.exs provided the Remote team with a better foundation for managing configuration. This enabled them to move many configurations from compile-time to runtime, significantly reducing unnecessary recompilations caused by configuration updates.
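A minimal sketch of that pattern (the application and keys here are hypothetical): values read in config/runtime.exs are resolved when the system boots, so changing them no longer touches compile-time configuration and triggers no recompilation.

```elixir
# config/runtime.exs -- evaluated when the system boots, not at compile time
import Config

config :my_app, MyApp.PaymentsClient,
  base_url: System.get_env("PAYMENTS_URL", "https://sandbox.example.com"),
  api_key: System.fetch_env!("PAYMENTS_API_KEY")
```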
One might expect Remote’s infrastructure to be highly complex, given their global scale and the size of their engineering team. Surprisingly, their setup remains relatively simple, reflecting a thoughtful balance between scalability and operational efficiency.
Remote runs on AWS, using EKS (Elastic Kubernetes Service). The main backend (the monolith) operates in only five pods, each with 10 GB of memory. They use Distributed Erlang to connect the nodes in their cluster, enabling seamless communication between processes running on different pods.
For job processing, they rely on Oban, which runs alongside the monolith in the same pods.
Remote also offers a public API for partners. While this API server runs separately from the monolith, it is the same application, configured to start a different part of its supervision tree. The separation was deliberate, as the team anticipated different load patterns for the API and wanted the flexibility to scale it independently.
The database setup includes a primary PostgreSQL instance on AWS RDS, complemented by a read-replica for enhanced performance and scalability. Additionally, a separate Aurora PostgreSQL instance is dedicated to storing Oban jobs. Over time, the team has leveraged tools like PG Analyze to optimize performance, addressing bottlenecks such as long queries and missing indexes.
This streamlined setup has proven resilient, even during unexpected spikes in workload. The team shared an episode where a worker’s job count unexpectedly grew by two orders of magnitude. Remarkably, the system handled the increase seamlessly, continuing to run as usual without requiring any design changes or manual intervention.
We once noticed two weeks later that a worker’s load had skyrocketed. But the scheduler worked fine, and everything kept running smoothly. That was fun.
- Alex Naser, Staff Engineer
Around 90% of their backend team works in the monolith, while the rest work in a few satellite services, also written in Elixir.
Within the monolith, teams are organized around domains such as onboarding, payroll, and billing. Each team owns one or multiple domains.
To streamline accountability in a huge monolith architecture, Remote invested heavily in team assignment mechanisms.
They implemented a tagging system that assigns ownership down to the function level. This means any trace—whether sent to tools like Sentry or Datadog—carries a tag identifying the responsible team. This tagging also extends to endpoints, allowing teams to monitor their areas effectively and even set up dashboards for alerts, such as query times specific to their domain.
The tagging system also simplifies CI workflows. When a test breaks, it’s automatically linked to the responsible team based on the Git commit. This ensures fast issue identification and resolution, removing the need for manual triaging.
Remote’s hiring approach prioritizes senior engineers, regardless of their experience with Elixir.
During the hiring process, all candidates are required to complete a coding exercise in Elixir. For those unfamiliar with the language, a tailored version of the exercise is provided, designed to introduce them to Elixir while reflecting the challenges they would face if hired.
Once hired, new engineers are assigned an engineering buddy to guide them through the onboarding process.
For hires without prior Elixir experience, Remote developed an internal Elixir training camp, a curated collection of best practices, tutorials, and other resources to introduce new hires to the language and ecosystem. This training typically spans two to four weeks.
After completing the training, engineers are assigned their first tasks—carefully selected tickets designed to build confidence and familiarity with the codebase.
Remote’s journey highlights how thoughtful technology, infrastructure, and team organization decisions can support rapid growth.
By leveraging Elixir’s strengths, they built a monolithic architecture that balanced simplicity with scalability. This approach allowed their engineers to iterate quickly in the early stages while effectively managing the complexities of a growing codebase.
Investments in tools like the Boundary library and incremental builds ensured their monolith remained efficient and maintainable even as the team and codebase scaled dramatically.
Remote’s relatively simple infrastructure demonstrates that scaling doesn’t always require complexity. Their ability to easily handle unexpected workload spikes reflects the robustness of their architecture and operational practices.
Finally, their focus on team accountability and streamlined onboarding allowed them to maintain high productivity while integrating engineers from diverse technical backgrounds, regardless of their prior experience with Elixir.
Elixir v1.18 is an impressive release with improvements across the two main efforts happening within the Elixir ecosystem right now: set-theoretic types and language servers. It also comes with built-in JSON support and adds new capabilities to its unit testing library. Let’s go over each of those in detail.
There are several updates in the typing department, so let’s break them down.
There is an on-going research and development effort to bring static types to Elixir. Elixir’s type system is:
sound - the types inferred and assigned by the type system align with the behaviour of the program
gradual - Elixir’s type system includes the dynamic() type, which can be used when the type of a variable or expression is checked at runtime. In the absence of dynamic(), Elixir’s type system behaves as a static one
developer friendly - the types are described, implemented, and composed using basic set operations: unions, intersections, and negation (hence it is a set-theoretic type system)
More interestingly, you can compose dynamic() with any type. For example, dynamic(integer() or float()) means the type is either integer() or float() at runtime. This allows the type system to emit warnings if none of the types are satisfied, even in the presence of dynamism.
Elixir v1.17 was the first release to incorporate the type system in the compiler. In particular, we have added support for primitive types (integer, float, binary, pids, references, ports), atoms, and maps. We also added type checking to a handful of operations related to those types, such as accessing fields in maps, as in user.adress (mind the typo), performing structural comparisons between structs, as in my_date < ~D[2010-04-17], etc.
The most exciting change in Elixir v1.18 is type checking of function calls, alongside gradual inference of patterns and return types. To understand how this will impact your programs, consider the following code defined in lib/user.ex:
defmodule User do
  defstruct [:age, :car_choice]

  def drive(%User{age: age, car_choice: car}, car_choices) when age >= 18 do
    if car in car_choices do
      {:ok, car}
    else
      {:error, :no_choice}
    end
  end

  def drive(%User{}, _car_choices) do
    {:error, :not_allowed}
  end
end
Elixir’s type system will infer the drive function expects a User struct as input and returns either {:ok, dynamic()} or {:error, :no_choice} or {:error, :not_allowed}. Therefore, the following code
User.drive({:ok, %User{}}, car_choices)
will emit a warning stating that we are passing an invalid argument.
Now consider the expression below. We are expecting the User.drive/2 call to return :error, which cannot possibly be true:
case User.drive(user, car_choices) do
  {:ok, car} -> car
  :error -> Logger.error("User cannot drive")
end
Therefore, the code above would emit a warning.
Our goal is for the warnings to provide enough contextual information to lead to clear reports, and that’s an area where we are actively looking for feedback. If you receive a warning that is unclear, please open up a bug report.
Elixir v1.18 also augments the type system with support for tuples and lists, plus type checking of almost all Elixir language constructs, except for-comprehensions, with, and closures. Here is a non-exhaustive list of the new violations that can be detected by the type system (a sketch of one such case follows the list):
if you define a pattern that will never match any argument, such as def function(x = y, x = :foo, y = :bar)
matching or accessing tuples at an invalid index, such as elem(two_element_tuple, 2)
if you have a branch in a try that will never match the given expression
if you have a branch in a cond that always passes (except the last one) or always fails
if you attempt to use the return value of a call to raise/2 (which by definition returns no value)
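As a sketch of the second item above (hypothetical module): the pattern tells the type system the argument is a two-element tuple, so the out-of-range elem/2 call can be reported at compile time.

```elixir
defmodule Shapes do
  # elem/2 is zero-based, so index 2 does not exist in a two-element tuple;
  # this is the kind of call the type system can now flag.
  def bad_third({_a, _b} = pair), do: elem(pair, 2)
end
```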
In summary, this release takes us further in our journey of providing type checking and type inference of existing Elixir programs, without requiring Elixir developers to explicitly add type annotations.
For existing codebases with reasonable code coverage, most type system reports will come from uncovering dead code - code which won’t ever be executed - as seen in a few distinct projects. A notable example is the type system’s ability to track how private functions are used throughout a module and then point out which clauses are unused:
defmodule Example do
  def public(x) do
    private(Integer.parse(x))
  end

  defp private(nil), do: nil
  defp private("foo"), do: "foo"
  defp private({int, _rest}), do: int
  defp private(:error), do: 0
  defp private("bar"), do: "bar"
end
Keep in mind that the current implementation does not yet perform type inference of guards, which are an important source of typing information in programs. There is still a lot the type system could learn about our codebases that it does not yet. This brings us to the next topic.
The next Elixir release should improve the typing of maps, tuples, and closures, allowing us to type even more constructs. We also plan to fully type the with construct, for-comprehensions, as well as protocols.
But more importantly, we want to focus on complete type inference of guards, which in turn will allow us to explore ideas such as redundant pattern matching clauses and exhaustiveness checks. Our goal with inference is to strike the right balance between developer experience, compilation times, and the ability of finding provable errors in existing codebases. You can learn more about the trade-offs we made for inference in our documentation.
Future Elixir versions will introduce user-supplied type signatures, which should bring the benefits of a static type system without relying on inference. Check our previous article on the overall milestones for more information.
The type system was made possible thanks to a partnership between CNRS and Remote. The development work is currently sponsored by Fresha (they are hiring!), Starfish*, and Dashbit.
Three months ago, we welcomed the Official Language Server team, with the goal of unifying the efforts behind code intelligence, tools, and editors in Elixir. Elixir v1.18 brings new features on this front by introducing locks and listeners to its compilation. Let’s understand what it means.
At the moment, all language server implementations have their own compilation environment. This means that your project and dependencies during development are compiled once, for your own use, and then again for the language server. This duplicate effort could cause the language server experience to lag, when it could be relying on the already compiled artifacts of your project.
This release addresses the issue by introducing a compiler lock, ensuring that only a single operating system process running Elixir compiles your project at a given moment, and by providing the ability for one operating system process to listen to the compilation results of others. In other words, different Elixir instances can now communicate over the same compilation build, instead of racing each other.
These enhancements not only improve editor tooling, they also directly benefit projects like IEx and Phoenix. For example, with auto-reloading enabled inside IEx, running mix compile in one shell automatically reloads the changed modules inside the IEx session.
Erlang/OTP 27 added built-in support for JSON and we are now bringing it to Elixir. A new module, called JSON, has been added with functions to encode and decode JSON. Its most basic APIs reflect the ones from the Jason project (the de-facto JSON library in the Elixir community up to this point).
A new protocol, called JSON.Encoder, is also provided for those who want to customize how their own data types are encoded to JSON. You can also derive protocols for structs with a single line of code:
@derive {JSON.Encoder, only: [:id, :name]}
defstruct [:id, :name, :email]
The deriving API mirrors the one from Jason, helping those who want to migrate to the new JSON module.
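In practice, the basic API looks like the sketch below, using the encode!/decode! pair mirrored from Jason (note that key order in encoded maps is not guaranteed):

```elixir
iex> JSON.encode!(%{id: 1, name: "Jane"})
~s({"id":1,"name":"Jane"})

iex> JSON.decode!(~s({"id":1,"name":"Jane"}))
%{"id" => 1, "name" => "Jane"}
```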
ExUnit now supports parameterized tests. This allows your test modules to run multiple times under different parameters.
For example, Elixir ships a local, decentralized, and scalable key-value process storage called Registry. The registry can be partitioned, and its implementation differs depending on whether partitioning is enabled or not. Therefore, during tests, we want to ensure both modes are exercised. With Elixir v1.18, we can achieve this by writing:
defmodule Registry.Test do
  use ExUnit.Case,
    async: true,
    parameterize: [
      %{partitions: 1},
      %{partitions: 8}
    ]

  # ... the actual tests ...
end
Once specified, the number of partitions is available as part of the test configuration. For example, to start one registry per test with the correct number of partitions, you can write:
setup config do
  partitions = config.partitions
  name = :"#{config.test}_#{partitions}"
  opts = [keys: :unique, name: name, partitions: partitions]
  start_supervised!({Registry, opts})
  opts
end
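Since setup returns the options as part of the test context, each test can use the registry started for its parameter set; a sketch:

```elixir
test "registers and looks up a key", %{name: name} do
  {:ok, _owner} = Registry.register(name, "hello", :world)
  assert [{self(), :world}] == Registry.lookup(name, "hello")
end
```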
Prior to parameterized tests, Elixir resorted to code generation, which increased compilation times. Furthermore, ExUnit parameterizes whole test modules, which also allows the different parameters to run concurrently if the async: true option is given. Overall, this feature allows you to compile and run multiple scenarios more efficiently.
Finally, ExUnit also comes with the ability of specifying test groups. While ExUnit supports running tests concurrently, those tests must not have shared state between them. However, in large applications, it may be common for some tests to depend on some shared state, and other tests to depend on a completely separate state. For example, part of your tests may depend on Cassandra, while others depend on Redis. Prior to Elixir v1.18, these tests could not run concurrently, but in v1.18 they might as long as they are assigned to different groups:
defmodule MyApp.PGTest do
  use ExUnit.Case, async: true, group: :pg

  # ...
end
Test modules within the same group do not run concurrently, but across groups, they might.
With features like async tests, suite partitioning, and now grouping, Elixir developers have plenty of flexibility to make the most use of their machine resources, both in development and in CI.
mix format --migrate
The mix format command now supports an explicit --migrate flag, which will convert constructs that have been deprecated in Elixir to their latest version. Because this flag rewrites the AST, it is not guaranteed the migrated format will always be valid when used in combination with macros that also perform AST rewriting.
As of this release, the following migrations are executed:
Normalize parens in bitstring modifiers - it removes unnecessary parentheses in known bitstring modifiers, for example <<foo::binary()>> becomes <<foo::binary>>, or adds parentheses for custom modifiers, where <<foo::custom_type>> becomes <<foo::custom_type()>>.
Charlists as sigils - formats charlists as ~c sigils, for example 'foo' becomes ~c"foo".
unless as negated ifs - rewrites unless expressions using if with a negated condition, for example unless foo do becomes if !foo do. We plan to deprecate unless in future releases.
More migrations will be added in future releases to help us push towards more consistent codebases.
Other notable changes include PartitionSupervisor.resize!/2, for resizing the number of partitions (aka processes) of a supervisor at runtime, Registry.lock/3 for simple in-process key locks, PowerShell versions of the elixir and elixirc scripts for better DX on Windows, and more. See the CHANGELOG for the complete release notes.
Happy coding!