Erlang Highlights 2019 - Best Of The BEAM.

Despite being over 30 years old (and open source for 21), Erlang continues to evolve, finding new industries to impact, fresh use cases and exciting stories. In February 2019, the Erlang Ecosystem Foundation was announced at Code BEAM San Francisco. The foundation brings together a diverse community of BEAM users, including corporate and commercial interests, across the Erlang and Elixir ecosystem. It encourages the continued development of technologies and open source projects based on and around the BEAM, its runtime and its languages. This is an exciting step to ensure the ongoing success of the technologies we specialise in; as their hashtag so aptly puts it, #weBEAMtogether.

Erlang’s enigmatic standing in the developer community was best summed up by StackOverflow’s annual developer survey. This year, Erlang featured as one of the top 10 best-paid languages for developers. It also appeared in all three of the survey’s most loved, most dreaded and most wanted technology lists.

Throughout 2019, there have been many fantastic articles, guides, podcasts and talks from members of the community showing off the capabilities of the technology. If you’re looking for inspiration, or want to see why a language that is over 30 years old still provides some of the best-paid jobs, check out these fantastic stories.

Top Erlang Resources 2019

TalkConcurrency

We were privileged to host a panel of industry legends including Sir Tony Hoare, Carl Hewitt and the late Joe Armstrong. What followed was an open discussion about the need for concurrency and how it is likely to evolve in the future. Carl Hewitt is the designer of the logic programming language Planner and is known for his work on evolving the actor model. Sir Tony Hoare developed the sorting algorithm Quicksort and has been highly decorated for his work within computer science, including six honorary doctorates and a knighthood in 2000. Most in our community will know Joe Armstrong as one of the inventors of Erlang and someone whose work was highly influential in the field of concurrency. Each of our three guests is highly celebrated for his work on and approach to concurrency, and each has made his impact in his own way while using different technologies. The wisdom these three legends hold is clearly on show during the discussion. It is truly a must-watch for anyone with even a passing interest in Erlang, Elixir and the BEAM.

How to introduce Dialyzer to a large project

Dialyzer is a fantastic tool for identifying discrepancies and errors in Erlang code. Applying it to a considerable codebase can be slow, particularly when that codebase has never been analysed with Dialyzer before. In this blog, Brujo Benavides demonstrates how the team at NextRoll reduced discrepancies in their code by a third in just a week, while also setting things up so that Dialyzer could be part of ongoing development. Read the blog here.

Five 9’s for five years at the UK’s National Health Service

Martin Sumner joined the Elixir Talk podcast for a fantastic discussion of the work they’re doing at the NHS. Their centralised exchange point handles over 65 million record requests a day. Availability is vital due to the nature of medical information. Using Riak, they have managed to maintain 99.999% availability for over five years, an impressive effort. Listen to the podcast here.

Saša Jurić shared the Soul of Erlang

One of the most shared and talked-about conference videos of 2019 was Saša Jurić’s ‘The Soul of Erlang’ at GOTO Chicago 2019, and with good reason. It is an articulate, passionate summary of what makes Erlang so unique, and of why it can achieve things that are so difficult in other technologies. Watch the video here.

Who is using Erlang & why?

When we launched our blog on the companies using Erlang and why, we had no idea just how much it would resonate with the community. To date, there have been over 25,000 visits to the page. It was the top story on HackerNews and continues to generate a high volume of visits four months after its initial release. The reception to this blog shows the ongoing interest in the language, and the appetite for in-production examples of Erlang at work. Read the blog here.

BEAM extreme

AdRoll deals with an average of half a million real-time bid requests per second, with spikes substantially higher than that, and each big spike has significant financial implications. As a result, they’ve had to develop a set of tricks to give their system a little performance boost. In this talk from ElixirConf, Miriam Pena demonstrates some of the tactics she has seen and used to give the BEAM an extra edge in speed and memory use. Watch the talk here.

Ten years of Erlang

Fred Hebert is an experienced, passionate and respected member of the Erlang community. His conference talks, books and webinars are all extremely valuable resources. This year, he celebrated ten years as part of the community and took time to reflect on Erlang’s past, its growth, and where it may go in the future. The blog is a fantastic read, and we recommend it for anyone who is passionate about the BEAM. Read the blog here.

Testable, high performance, large scale distributed Erlang

Christopher Meiklejohn presents the design of an alternative runtime system for improved scalability and reduced latency in distributed actor applications using Partisan, which is built in Erlang. Watch the talk here.

Erlang for Blockchain

As blockchain’s in-production uses continue to grow, such as Walmart’s use of smart contracts in its logistics supply chain, Erlang has increasingly become the language of choice for blockchain providers. ArcBlock joined the Erlang Ecosystem Foundation as a founding sponsor, and also joined us for guest blogs and a webinar. Aeternity is another big advocate for the use of Erlang in blockchain development. You can read about their experience using the BEAM for blockchain here.

Solving embarrassingly obvious problems in Erlang

Often, when people complain about the syntax of Erlang, they are making simple errors that can be fixed with a change of mindset. In this blog, Garrett Smith shows how to make simple shifts to eliminate these errors and, in the process, become a better programmer. Read his solutions here.

WhatsApp user migration

WhatsApp continues to be one of the most famous examples of Erlang development. This year, they spoke to the crowd at Code BEAM SF about how they migrated their 1.5 billion users to the Facebook infrastructure. Watch their talk here. And, for those interested, WhatsApp are currently growing their London team.

Summary

2019 showed that there is still demand for Erlang and the reliability and fault-tolerance it delivers. 2020 is already looking like an exciting year. The growth of FinTech, digital banking and blockchain all provide exciting avenues of expansion for the language. The newly formed Erlang Ecosystem Foundation has working groups dedicated to developing libraries and tools that make the Erlang ecosystem even easier to use, helping grow the community. And, for the first time, the BEAM will have a dedicated room at FOSDEM, which is sure to introduce more developers to the language. If you’d like to catch all of our news, guides and webinars in 2020 and beyond, join our mailing list.

You may also like

Best of Elixir 2019

Best of RabbitMQ 2019 - Coming soon

Best of FinTech 2019 - Coming soon

Permalink

Elixir Highlights 2019 - Best Of The BEAM.

Elixir is a relatively young but rapidly maturing technology. This year, new organisations discussing the use of Elixir in their tech stack (such as retail giant PepsiCo) showed the continued commercial adoption of the language. Meanwhile, the emergence of Phoenix LiveView showed growth in the technology’s capabilities. LiveView offers Elixir developers the promise of web applications without JavaScript.

The Stack Overflow Developer Survey 2019 reflected the ongoing upward trajectory of the language: Elixir featured as one of the most popular languages, was one of the top 10 most loved languages, and was listed as one of the best paid.

2019 felt like a turning point for closer collaboration between the Erlang and Elixir communities, with both languages benefitting from the development of the Erlang Ecosystem Foundation and the Telemetry library.

From machine learning to IoT and connected devices, there was also a vast array of impressive examples of Elixir being used in some of the biggest growth sectors in technology. As these areas continue to grow and shape development, it is both exciting and important that we develop and share in-production examples of how Elixir’s fault-tolerance, reliability and developer community can provide ideal solutions.

With so many great conference talks, webinars, guides and podcasts shared throughout the year, it is understandable that you may have missed some. To help inspire you and spread the stories worth sharing, we’ve curated our Best Of The BEAM 2019 list for Elixir.

Top Elixir Resources 2019

Phoenix LiveView Release

The release of Phoenix LiveView was a game-changer in late 2018, and it continued to be a hugely influential story this year. Chris McCord showed off how Elixir developers can build real-time, interactive web apps without JavaScript using Phoenix LiveView at ElixirConf EU. He also joined the team at the Elixir Talk podcast twice over the course of the year, in episode 140 and episode 157, to discuss the library.

Tetris in Phoenix LiveView

One fantastically fun example of Phoenix LiveView in action is this version of Tetris. It’s fully playable and looks great. Sandesh, who developed the game, will be our first webinar guest of 2020, giving a live coding demonstration of a new and improved version. You can register for the webinar here.

Who is using Elixir

From Pinterest to PepsiCo, we took a look at some of the biggest names using Elixir in their tech stack. The article offers a high-level explanation of who is using Elixir, what features they are using and the results they are getting. With companies in travel, online gaming and FinTech, it is clear to see the broad scope of companies that can benefit from Elixir. Read the blog here.

Telemetry

The Telemetry library is an open source interface for telemetry data. It aims to unify and standardise how libraries and applications on the BEAM are instrumented and monitored. The project was developed by Arkadiusz Gil, representing Erlang Solutions, in conjunction with the Elixir core team and is continually being evolved by an active and passionate community. You can find Telemetry on GitHub and read more about it on our blog announcing its release.

Machine learning

Machine learning can help businesses obtain a substantial competitive advantage by reducing operating costs, building business insights and improving conversion rates. In an eCommerce environment, machine learning can lead to a significant improvement in ROI. Our team used their experience building a machine learning platform to develop this helpful guide. You can learn how Elixir can be used for machine learning at our blog.

Elixir makes you a better programmer

As we mentioned in the introduction, Elixir was amongst the best-paid technologies for developers in 2019, according to StackOverflow’s developer survey. Learning Erlang and Elixir is valuable for many reasons, one of which is that it will give you a fresh perspective that improves the way you solve problems in other technologies. It is an important message to share to help us spread the language to more members of the broader developer community. This article by Alec Brunelle is an excellent example of how learning Elixir can make you a better developer in any language.

Elixir for IoT

IoT offers a deluge of opportunity to developers, and Elixir’s concurrency makes it a great choice for IoT projects. In just his second Elixir project, James Every was able to connect over 100,000 IoT devices to a single server. Watch his talk from Code Elixir to see the power of Elixir for connected devices.

Tracing in Elixir

Many Elixir developers are still missing out on the benefits of tracing. Gabor Olah, one of the expert developers in our team, explained the options available for tracing in Elixir and debunked common myths in his guide that is sure to help you master the art of tracing. Read the article and reap the benefits.

Broadway by Plataformatec

Broadway is a tool that will help developers build concurrent, multi-stage data processing pipelines by allowing them to consume data efficiently from different sources, including Amazon SQS and RabbitMQ. Using Broadway, you can cut down the time to build pipelines while also avoiding many of the common problems that can arise when doing so. Watch Marlus Saraiva, from the Plataformatec R&D team, discuss Broadway at ElixirConf this year here.

Live coding webinars with Bruce Tate

Bruce Tate joined us for a live coding webinar in May 2019 to show off how to get the most out of OTP. Over 240 Elixir programmers joined us for an incredibly popular demonstration, and the feedback was so overwhelming that he joined us again in September to do a live coding demonstration of how to use a GenStateMachine in Elixir.

Concurrency for Elixir

As the popularity of Elixir has grown, so has the community of developers who have transitioned from object-oriented programming. Many people are aware that Elixir offers a world-class concurrency model without understanding how it achieves these results. This article from Sophie DeBenedetto does a fantastic job of explaining concurrency with the incredibly relatable example of laundry. You can read the article here.

Parsing from first principles

Saša Jurić’s live coding demonstration at WebCamp 2019 was incredibly well-received, and with good reason. The talk shows a methodology for parsing simply and efficiently and will be particularly helpful for experienced programmers who are not familiar with the concept of parsing. Watch it here.

Summary

2019 was another incredibly exciting year for Elixir, with more large-scale commercial companies showing off in-production examples of its use in their tech stack. We expect this to continue into 2020, with the Elixir core team and the Erlang Ecosystem Foundation both developing working groups to create libraries and tools which make the language even easier to use. The newly established BEAM room at FOSDEM is also likely to bring a fresh batch of developers from other languages to Elixir. As with Erlang, the growth of FinTech, digital banking, blockchain, IoT and machine learning all provide exciting possibilities for the language to expand and deliver excellent commercial results. If you’d like to stay up-to-date with all of our news, guides and webinars next year, please join our mailing list.

You may also like:

Best of Erlang 2019

Best of RabbitMQ 2019 - Coming soon

Best of FinTech 2019 - Coming soon

Permalink

Fintech 2.0 Incumbents vs Challengers - Banking’s Battle Royale

The fundamental building blocks of finance and financial services (FS) are transforming, driven by emerging technologies and changing societal, regulatory, industrial, commercial and economic demands. Fintech, or financial technology, is changing the FS industry via infrastructure-based technology and open APIs. The venue for last week’s World Fintech Forum was filled with fintechs from across the ecosystem and representatives from big banks. The agenda promised two days of talks covering the spectrum of discussion points occupying the industry, and this blog post looks at the top 10 takeaways for those unable to attend the event.

Key takeaways

  1. Great user experience is vital
  2. There’s a lot of regional variation in fintech disruption
  3. More challengers and incumbents are partnering
  4. Big Tech’s strength is in data
  5. Open Banking: regulations proving a boost to fintechs
  6. Blockchain: A dramatic increase in levels of investment
  7. For Cryptocurrencies - the jury’s still out
  8. AI is key
  9. Talent recruitment is one of the biggest challenges
  10. RegTech enables compliance with the changing regulatory landscape

1. The customer really is king

Consumers and business customers alike have embraced the idea of on-demand finance, thanks to mobile and cloud computing. Fintech trends show that people are more comfortable managing their money and business online, and they’re less willing to put up with the comparatively slower and less flexible processes of traditional financial services.

Present on day one of the conference was Laurence Krieger, COO of SME lender Tide, who views customer experience as the key advantage held by challengers. Tide spotted a gap in SME banking where incumbents were simply offering the same type of products as those offered to retail customers, repackaged for SMEs. Meanwhile, Filip Surowiak from ViaCash looked at how they are bridging the gap between cash and digital cash solutions, addressing the movement away from branch and ATM use towards a more convenient model. It is these customer-centric approaches which will both win and retain business for fintechs and incumbents alike.

2. The West is moving at a different pace to Asia

While there are over 1,600 UK fintech companies, a figure set to double by 2030 according to the UK Government’s Tech Nation report, it is emerging economies which lead the global charge in the sector. Fintech adoption stands at 71 percent in the UK, compared with 87 percent in both China and India.

Chinese fintech ecosystems have scaled and innovated faster than their counterparts in the West. In Asia there are singular platforms, or super apps, which combine FS, entertainment and lifestyle products; as yet, the equivalent does not exist elsewhere.

3. Partnerships are the favoured way to go for incumbents and fintechs

Despite recent announcements such as HSBC’s Kinetic platform and Bo from RBS, building their own digital solutions takes banks significant investment and resources, with no guarantee of success. It also takes significant capital to acquire a successful competing fintech. Strategic investments and partnerships were the approach most favoured by the incumbents present at the Fintech Forum.

According to McKinsey’s report, Synergy and Disruption: Ten trends shaping fintech, 80 percent of financial institutions have entered fintech partnerships. Meanwhile, 82 percent expect to increase fintech partnerships in the next three to five years, according to a Department for International Trade report entitled UK FinTech State of the Nation.

At the conference, BNY Mellon, RBS, Citi Ventures and others were all present, positioning their organisations as willing and accessible partners for budding fintech startups. Luis Valdich, MD at Citi Ventures, spoke of their success stories with Pindrop Security and Trulioo, and warned us all to avoid being pushed towards a focus on services instead of on the product in collaborations. Udita Banerjee from RBS and Christian Hull from BNY Mellon also painted a picture of a welcoming environment in which ambitious startups can succeed.

4. Fintech v Techfin - Big Tech’s data riches

During day one’s panel on FS disruption, Stephanie Waismann, CCO at Barclays, argued that the key advantage for incumbents lies in the data they hold. This may be true, but when it comes to the amount of data available, Big Tech’s GAFA (Google, Apple, Facebook and Amazon) are unrivalled. They appear poised to grab a substantial slice of the pie by leveraging their deep customer relationships and knowledge to offer financial products.

Joan Cuko, mobile payments analyst from Vodafone, examined Big Tech’s role in the future of FS, classifying them as frenemies, and (again) collaboration was highlighted as the ‘best path for long-term growth’.

The positioning of fintech challengers closer to the tech side of what they do indicates the importance they attach to this part of their business. Noam Zeigerson, Chief Data Officer at Tandem Bank, posed the rhetorical question of whether they are a bank or a tech player. Likewise, Tide’s Laurence Krieger sees fintechs as tech players first and financial services providers second.

5. Open Banking’s untapped potential

Open Banking’s enabling technology, open APIs, allows non-financial businesses to offer financial services to their clients based on customer banking data, and is recognised as having revolutionary potential.

Sam Overton, Head of Commercial at Bud, discussed the latent power behind Open Banking and how banks and fintechs can approach end consumers to empower them and prove that their data and privacy are not at risk. He emphasised the need for alignment on value: Open Banking is currently pitched in a one-size-fits-all manner, and for real traction there needs to be segmentation for individual user groups, such as young parents.

6. Blockchain’s here to stay?

DLT or blockchain’s components, such as cryptographic hashes, distributed databases and consensus-building, are not new. When combined, however, they create a powerful new form of data sharing and asset transfer, capable of eliminating intermediaries, central third parties and expensive reconciliation processes.

Monday’s panel included Keith Bear, Fellow of The Cambridge Judge Business School, whose ‘2nd Global Blockchain Benchmarking Study’ found:

The Banking, Financial Markets and Insurance industries are responsible for the largest share of live (enterprise blockchain) networks.

Manu Manchal, Managing Director at ConsenSys, described the FS landscape as one of price compression and increased competition, with blockchain essential to the ultimate ‘goal of bringing the cost of providing financial products to zero’. Noam Zeigerson from Tandem Bank regards blockchain as the ‘most transformative technology of all time, only surpassed by the internet’. Soren Mortensen, Director of Global Financial Markets at IBM, summed things up nicely when it comes to blockchain in FS: ‘everyone acknowledges that the technology is proven – what’s needed is greater value propositions and use cases’.

7. For Cryptocurrencies - the jury’s out

OKCoin’s Head of Europe, Gabor Nguy, set out the case for cryptocurrency’s role within FS. While the benefits of blockchain for improving banks’ processes are accepted, cryptocurrency is still seen somewhat as the unruly upstart within much of the financial family. Gabor argued, with broad agreement, that ‘digital assets are here to stay’; only this week, a UK legal panel formally recognised both digital assets and smart contracts.

Again, when it comes to digital assets, Europe is some way behind Asia in terms of mass adoption. A recent blog post by our Technical Lead, Dominic Perini, discusses the psychology of digital asset ownership and provenance.

8. AI is the technology ‘must-have’ for banks

Whereas for many, blockchain is the technology which holds the biggest potential, the panel on the second day highlighted the role of AI. The global technology research firm Gartner estimates that the blockchain market will be worth roughly $23 billion by 2023; by way of comparison, the estimated business value created by AI is $3.9 trillion for 2022. Jeremy Sosabowski, who works on AI-based risk analysis at AlgoDynamix, sees AI not as a nice-to-have but as a ‘must-have for banking’.

The biggest potential for AI lies in middle-office banking, involving compliance and risk. Angelique Assaf spoke about Cedar Rose’s use of AI-powered models for credit risk assessments, with a clear message: ‘Data is our asset and AI is our tool’.

The key consideration for implementing AI again focused on the question of whether to build or to buy. With 60 percent of AI talent being absorbed into the tech and FS sectors, Nils Mork-Ulnes, Head of Strategy at AIG, was on hand to provide valuable insight into building an AI-centred product team.

Soren Mortensen from IBM framed things by arguing that AI does not actually exist yet (and won’t until at least 2050); what we currently have is Augmented Intelligence. He identified the key driver for successful AI implementation as data availability, together with data quality and appropriateness.

Of course, the high-profile fallout involving the Goldman Sachs-backed Apple Card provides a timely reminder of the nascent nature of both cross-industry collaboration and the use of AI in the form of black-box algorithms.

9. The human element is still key to success

Attracting specialist skills, both technical and non-technical, is generally recognised as one of the biggest challenges in the fintech space. Laurence Krieger from Tide explained that, in his experience, the very best graduates now want to move into the fintech/startup world. Meanwhile, Stephanie Waismann from Barclays observed that many banks are looking to recruit from outside the banking and FS industry for a broader range of skills, and are going to students and universities to help fill gap areas.

10. Regulatory compliance must be at the centre of planning

Emerging international standards have mostly taken the form of high-level principles, leaving national implementation (both regulation and supervision) to diverge considerably across jurisdictions and across different FS sectors. The Financial Conduct Authority (FCA) is a key influence in setting standards of regulation globally.

On hand to advise eager young startups on the importance of regulatory compliance was Ross Paolino, General Counsel at Western Union. Of course, FS is one of the most highly regulated industries, and compliance is becoming increasingly important, whether for startups or large institutions.

Agreed upon by all was the fact that tech simply moves faster than regulators can regulate, and that the gap will always exist to a greater or lesser extent. Paolino directed us towards regulatory sandboxes as the best option for testing new ideas.

Concluding Thoughts

As we have witnessed with publishing, financial services are made up of information rather than physical goods, and are therefore seen as one of the industries most vulnerable to disruption by software technology.

Of course, not all fintech startups are out to hurt banks; in fact, many services use legacy platforms to bring banks more customers. For incumbents, the costs of digitisation are substantial, so partnering with specialist vendors is the most efficient way to approach implementing changes to each technology layer. Trying to connect the dots between different parts of the community here in the UK, and acting as a portal for organisations worldwide, is Fintech Alliance, who are backed by the Department for International Trade. They aim to provide access to people, firms and information, including connections to investors, policy and regulatory updates, and the ability to attract and hire workers.

In the end, the talks from the stage and the conversations during the networking sessions at the World Fintech Forum lived up to expectations and left an overall impression of positivity around the fintech ecosystem.

To keep up to date with what we are doing in the fintech space, building scalable, fault-tolerant systems, subscribe for updates here.

We thought you might also be interested in

What We Do in the Financial Services Sector

Enterprise Blockchain’s Big Questions

Which Companies Use Erlang and Why?

Permalink

Testing Textually Composed Numbers for Primality

Last night on Twitter one of my favorite accounts, @fermatslibrary, put up an interesting post:

Start at 82 and write it down, then 81, write that down, etc. until you reach one, and you’ve written a (huge) prime number. Wow!

This seemed so strange to me, so of course I got curious:

After some doodling around I wrote a script that checks whether numbers constructed by the method above are prime when the construction is performed in various numeric bases (from 2 to 36, limited to 36 because for now I’m cheating by using Erlang’s integer_to_list/2). It prints the results to the screen as the process handling each base finishes, and at the end writes a text file “texty_primes-result-[timestamp].eterms” for convenience, so you can use file:consult/1 on it later to mess around.

There are a few strange aspects to large primes, one of them being that checking whether or not they are prime can be a computationally intensive task (and nobody knows a shortcut to this). To this end I wrote the Miller-Rabin primality test into the script and let the caller decide how many rounds of Miller-Rabin to run against each number. So far the numbers that have come out have matched what is expected, but once the numbers get extremely large (and they get pretty big in a hurry!) there is only some degree of confidence that they are really prime, so don’t take the output as gospel.
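If you want to play with the two core ideas without running the escript, here is a minimal sketch in Elixir (the original script is an Erlang escript; the module and function names below are illustrative, not taken from it): building the “count back from X” number in a given base, plus a basic Miller-Rabin probable-prime check.

defmodule TextyPrimes do
  # Concatenate from, from-1, ..., 1 in the given base, then read the
  # digits back as one integer in that base.
  def construct(from, base) when from >= 1 and base in 2..36 do
    from..1//-1
    |> Enum.map_join(&Integer.to_string(&1, base))
    |> String.to_integer(base)
  end

  # Miller-Rabin probable-prime test with `rounds` random witnesses.
  def probably_prime?(n, _rounds) when n in [2, 3], do: true
  def probably_prime?(n, _rounds) when n < 2 or rem(n, 2) == 0, do: false

  def probably_prime?(n, rounds) do
    {d, s} = halve(n - 1, 0)            # n - 1 = d * 2^s with d odd
    Enum.all?(1..rounds, fn _ ->
      a = 1 + :rand.uniform(n - 3)      # random witness in 2..n-2
      x = modpow(a, d, n)
      x == 1 or x == n - 1 or squares_to_n_minus_1?(x, s - 1, n)
    end)
  end

  defp halve(d, s) when rem(d, 2) == 0, do: halve(div(d, 2), s + 1)
  defp halve(d, s), do: {d, s}

  # Square x up to k more times looking for n - 1; reaching 1 first
  # (or never reaching n - 1) marks n composite for this witness.
  defp squares_to_n_minus_1?(_x, 0, _n), do: false
  defp squares_to_n_minus_1?(x, k, n) do
    x = rem(x * x, n)
    x == n - 1 or (x != 1 and squares_to_n_minus_1?(x, k - 1, n))
  end

  # Modular exponentiation by repeated squaring (arbitrary precision).
  defp modpow(_b, 0, _m), do: 1
  defp modpow(b, e, m) when rem(e, 2) == 0 do
    h = modpow(b, div(e, 2), m)
    rem(h * h, m)
  end
  defp modpow(b, e, m), do: rem(b * modpow(b, e - 1, m), m)
end

TextyPrimes.construct(82, 10) builds the 155-digit number from the tweet, and TextyPrimes.construct(82, 10) |> TextyPrimes.probably_prime?(40) should report it as (very likely) prime.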

I wrote the program in Erlang as an escript, so if you want to run it yourself just download the script and execute it.
The script can be found here: texty_primes

A results file containing the (very likely) prime constructions in bases 2 through 36 using “count-back from X” where X is 1 to 500 can be found here: texty_primes-result-20191121171755.eterms
Analyzing from 1 to 500 in bases 2 through 36 took about 25 minutes on a mid-grade 8-core system (Ryzen5). There are some loooooooooong numbers in that file… It would be interesting to test the largest of them for primality in more depth.

(Note that while the script runs you will receive unordered “Base X Result” messages printed to stdout. This is because every base is handed off to a separate process for analysis and they finish at very different times somewhat unpredictably. When all processing is complete the text file will contain a sorted list of {Base, ListOfPrimes} that is easier to browse.)

An interesting phenomenon I observed while doing this is that some numeric bases seem simply unsuited to producing primes when numbers are generated in this manner; in particular, bases that are themselves prime numbers. Other bases seem to be rather fruitful places to search for this phenomenon.

Another interesting phenomenon is the wide range of numeric bases in which the numbers “21”, “321”, “4321” and “5321” turn out to be prime. “21” and “4321” in particular turn up quite a lot.

Perhaps most strangely of all is that base 10 is not a very good place to look for these kinds of primes! In fact, the “count back from 82” prime is the only one that can be constructed starting between 1 and 500 that I’ve found. It is remarkable that anyone discovered that at all, and also remarkable that it doesn’t happen to start at 14,562 instead of 82 — I’m sure nobody would have noticed this were any number much higher than 82 the magic starting point for constructing a prime this way.

This was fun! If you have any insights, questions, challenges or improvements, please let me know in the comments.

Permalink

Supercharge Your Elixir and Phoenix Navigation with vim-projectionist

If you came to Elixir from Rails-land, you might miss the navigation that came with vim-rails. If you’re not familiar with it, vim-rails creates commands like :A, :AV, :AS, and :AT to quickly toggle between a source file and its test file and commands like :Econtroller, :Emodel, and :Eview to edit files based on their type.

The good news is that the same person who made vim-rails also made vim-projectionist (thanks Tim Pope). And with it, we can supercharge our navigation in Elixir and Phoenix just like we had in Rails with vim-rails.

Projecting Back to the Future

The easiest way to use vim-projectionist is to set up projections in a .projections.json file at the root of your project. This is a basic file for Elixir projections:

{
  "lib/*.ex": {
    "alternate": "test/{}_test.exs",
    "type": "source"
  },
  "test/*_test.exs": {
    "alternate": "lib/{}.ex",
    "type": "test"
  }
}

With this configuration, projectionist allows us to alternate between test and source files using :A, and it can open that alternate file in a separate pane with :AS or :AV, or if you’re a tabs person, in a separate tab with :AT. Note that we define the "alternate" both ways so that both the source and test files have alternates.

If you’re wondering how it works, projectionist is grabbing any directory and files matched by * — from a globbing perspective it acts like **/* — and expanding it with {}. So the alternate of lib/project/sample.ex is test/project/sample_test.exs (and vice versa).

With that simple configuration, projectionist also defines two :E commands based on the "type":

  • :Esource project/sample will open lib/project/sample.ex, and
  • :Etest project/sample will open test/project/sample_test.exs.

Pretty neat, right? But wait! There’s more.

Templating

Projectionist has another really interesting feature — defining templates to use when creating files. Add the following templates to each projection:

{
  "lib/*.ex": {
    "alternate": "test/{}_test.exs",
    "type": "source",
+   "template": [
+     "defmodule {camelcase|capitalize|dot} do",
+     "end"
+   ]
  },
  "test/*_test.exs": {
    "alternate": "lib/{}.ex",
    "type": "test",
+   "template": [
+     "defmodule {camelcase|capitalize|dot}Test do",
+     "  use ExUnit.Case, async: true",
+     "",
+     "  alias {camelcase|capitalize|dot}",
+     "end"
+   ]
  }
}

The "template" key takes an array of strings to use as the template. In them, projectionist allows us to define a series of transformations that will act upon whatever is captured by *. We use {camelcase|capitalize|dot}, so if * captures project/super_random, projectionist will do the following transformations:

  • camelcase: project/super_random -> project/superRandom,
  • capitalize: project/superRandom -> Project/SuperRandom,
  • dot: Project/SuperRandom -> Project.SuperRandom

Example workflow

Let’s put it all together in a sample MiddleEarth project.

We can create a new file via :Esource middle_earth/minas_tirith. It will create a file lib/middle_earth/minas_tirith.ex with this template:

defmodule MiddleEarth.MinasTirith do
end

We can then create a test file by attempting to navigate to the (non-existing) alternate file. Typing :A will give us something like this:

Create alternate file?
1 /dev/middle_earth/test/middle_earth/minas_tirith_test.exs
Type number and <Enter> or click with mouse (empty cancels):

Typing 1 and <Enter> will create the test file test/middle_earth/minas_tirith_test.exs with this template:

defmodule MiddleEarth.MinasTirithTest do
  use ExUnit.Case, async: true

  alias MiddleEarth.MinasTirith
end

Here it is in gif form:

gif of the flow we just talked about

Very cool, right? But wait. There’s more.

Supercharge Phoenix Navigation

That simple configuration works for Elixir projects. And since Phoenix projects (beginning with Phoenix 1.3) have their files under lib/, it also works okay for Phoenix projects.

But without further changes, creating a Phoenix controller or a Phoenix channel will give us an extra Controllers or Channels namespace in our modules because of the directory structure. For example, creating lib/project_web/controllers/user_controller.ex will create a module ProjectWeb.Controllers.UserController instead of the desired ProjectWeb.UserController.

It would also be nice to have controller-specific templates that include use ProjectWeb, :controller in controllers and use ProjectWeb.ConnCase in controller tests (since we always need those use declarations). And, it would be extra nice to have access to an :Econtroller command.

We can make that happen by adding Phoenix-specific projections to our .projections.json file. Start with controllers:

{
  "lib/**/controllers/*_controller.ex": {
    "type": "controller",
    "alternate": "test/{dirname}/controllers/{basename}_controller_test.exs",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}Controller do",
      "  use {dirname|camelcase|capitalize}, :controller",
      "end"
    ]
  },
  "test/**/controllers/*_controller_test.exs": {
    "alternate": "lib/{dirname}/controllers/{basename}_controller.ex",
    "type": "test",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}ControllerTest do",
      "  use {dirname|camelcase|capitalize}.ConnCase, async: true",
      "end"
    ]
  },
  # ... other projections
}

Note that these projections no longer use the single * matcher for globbing. They use ** and * separately. And instead of simply using {} in alternate files, they explicitly use {dirname} and {basename}.

Why the change? Here’s what the projectionist documentation says:

For advanced cases, you can include both globs explicitly: "test/**/test_*.rb". When expanding with {}, the ** and * portions are joined with a slash. If necessary, the dirname and basename expansions can be used to split the value back apart.

Controller templates

By separating the globbing, we are able to create templates that do not include the extra Controllers namespace even though the path includes /controllers.

We get the project name with **, and we get the file name after /controllers with *_controller.ex. We then generate the namespace ProjectWeb by grabbing dirname (i.e. project_web) and putting it through a series of transformations. Similarly, we generate the rest of the module’s name by using basename, putting it through a series of transformations, and appending either Controller or ControllerTest.

We are also able to create more helpful controller templates since the projections are specific to controllers. Note the inclusion of " use {dirname|camelcase|capitalize}, :controller" and " use {dirname|camelcase|capitalize}.ConnCase, async: true" in our templates. Our controllers will now automatically include use ProjectWeb, :controller and our controller tests will automatically include use ProjectWeb.ConnCase, async: true.

:Econtroller command

Finally, we set the "type": "controller". That gives us the :Econtroller command. We can now create a controller with :Econtroller project_web/user. And for existing controllers, projectionist has smart tab completion. So typing :Econtroller user and hitting tab should expand to :Econtroller project_web/user or give us more options if there are multiple matches.

For example, in the MiddleEarth project we can edit the default PageController that ships with Phoenix by using :Econtroller page along with tab completion. And we can create a new MinasMorgul controller and controller test with our fantastic templates by typing :Econtroller middle_earth_web/minas_morgul and then going to its alternate file.

gif of using :Econtroller to open page controller

Projecting All the Things

I think you get the gist of it, so I will not go through all the projections. But just like we added the projections for the controllers, we can do the same for views, channels, and even feature tests if you frequently write those.

Below I included a sample file to get you started with controllers, views, channels, and feature tests. Take a look at it. If you prefer it in github-gist form, here’s a link to one. The best thing is that if my sample file does not fit your needs, you can always adjust it!

If you find any improvements, I would love to hear about them. I’m always looking for better ways to navigate files.

{
  "lib/**/views/*_view.ex": {
    "type": "view",
    "alternate": "test/{dirname}/views/{basename}_view_test.exs",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}View do",
      "  use {dirname|camelcase|capitalize}, :view",
      "end"
    ]
  },
  "test/**/views/*_view_test.exs": {
    "alternate": "lib/{dirname}/views/{basename}_view.ex",
    "type": "test",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}ViewTest do",
      "  use ExUnit.Case, async: true",
      "",
      "  alias {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}View",
      "end"
    ]
  },
  "lib/**/controllers/*_controller.ex": {
    "type": "controller",
    "alternate": "test/{dirname}/controllers/{basename}_controller_test.exs",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}Controller do",
      "  use {dirname|camelcase|capitalize}, :controller",
      "end"
    ]
  },
  "test/**/controllers/*_controller_test.exs": {
    "alternate": "lib/{dirname}/controllers/{basename}_controller.ex",
    "type": "test",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}ControllerTest do",
      "  use {dirname|camelcase|capitalize}.ConnCase, async: true",
      "end"
    ]
  },
  "lib/**/channels/*_channel.ex": {
    "type": "channel",
    "alternate": "test/{dirname}/channels/{basename}_channel_test.exs",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}Channel do",
      "  use {dirname|camelcase|capitalize}, :channel",
      "end"
    ]
  },
  "test/**/channels/*_channel_test.exs": {
    "alternate": "lib/{dirname}/channels/{basename}_channel.ex",
    "type": "test",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}ChannelTest do",
      "  use {dirname|camelcase|capitalize}.ChannelCase, async: true",
      "",
      "  alias {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}Channel",
      "end"
    ]
  },
  "test/**/features/*_test.exs": {
    "type": "feature",
    "template": [
      "defmodule {dirname|camelcase|capitalize}.{basename|camelcase|capitalize}Test do",
      "  use {dirname|camelcase|capitalize}.FeatureCase, async: true",
      "end"
    ]
  },
  "lib/*.ex": {
    "alternate": "test/{}_test.exs",
    "type": "source",
    "template": [
      "defmodule {camelcase|capitalize|dot} do",
      "end"
    ]
  },
  "test/*_test.exs": {
    "alternate": "lib/{}.ex",
    "type": "test",
    "template": [
      "defmodule {camelcase|capitalize|dot}Test do",
      "  use ExUnit.Case, async: true",
      "",
      "  alias {camelcase|capitalize|dot}",
      "end"
    ]
  }
}

Permalink

The Changeset API Pattern

Over time, as you gain overall experience with software development, you start noticing paths that can lead to much smoother sailing. These are called design patterns: formalized best practices that can be used to solve common problems when implementing a system.

One pattern that I am having great success with while working on web applications in Elixir is what I am calling, for lack of a better name, the Changeset API Pattern.

Before I get to the pattern itself, I'd like to outline what I consider the motivation behind it: data integrity.

Data integrity is the maintenance of, and the assurance of the accuracy and consistency of, data over its entire life-cycle, and is a critical aspect to the design, implementation and usage of any system which stores, processes, or retrieves data.
The overall intent of any data integrity technique is the same: ensure data is recorded exactly as intended. In short, data integrity aims to prevent unintentional changes to information. Data integrity is not to be confused with data security, the discipline of protecting data from unauthorized parties.

Overall Goal

Facilitate data integrity's main goal: ensure that data is recorded exactly as intended.

The Changeset API Pattern is not solely responsible for achieving this goal. However, used in conjunction with data modeling best practices such as proper column types and constraints, default values and so on, the pattern becomes an important application layer on top of an already established data layer, improving overall data integrity.

Database Data Integrity

As mentioned above, good database specifications facilitate data integrity. In Elixir, this is commonly achieved through Ecto, the most common library for interacting with application data stores, via its migration DSL:

defmodule Core.Repo.Migrations.CreateUsersTable do
  use Ecto.Migration

  def change do
    create table(:users, primary_key: false) do
      add :id, :binary_id, primary_key: true
      add :company_id, references(:companies, type: :binary_id), null: false
      add :first_name, :string, null: false
      add :last_name, :string, null: false
      add :email, :string, null: false
      add :age, :integer, null: false
      timestamps()
    end

    create index(:users, [:email], unique: true)
    create constraint(:users, :age_must_be_positive, check: "age > 0")
  end
end

In the migration above we are specifying:

  • column data types;
  • columns can't have null values;
  • company_id is a foreign key;
  • email column is unique;
  • age has to be greater than zero.

Depending on your datastore and column type you can apply a variety of data constraints to fulfill your needs. Ideally, the specifications defined in the migration should align with your Ecto Schema and generic changeset:

defmodule Core.User do
  use Ecto.Schema
  import Ecto.Changeset
  
  alias Core.Company

  @primary_key {:id, :binary_id, autogenerate: true}
  @timestamps_opts [type: :utc_datetime]
  schema "users" do
    belongs_to(:company, Company, type: :binary_id)
    field(:first_name, :string)
    field(:last_name, :string)
    field(:email, :string)
    field(:age, :integer)
    timestamps()
  end
  
  @required_fields ~w(company_id first_name last_name email age)a
  
  def changeset(struct, params) do
    struct
    |> cast(params, @required_fields)
    |> validate_required(@required_fields)
    |> validate_number(:age, greater_than: 0)
    |> unique_constraint(:email)
    |> assoc_constraint(:company)
  end
end

These should be considered your main gate in terms of data integrity, as they ensure data will only be stored if all checks pass. From there you can add other layers on top, for example, the Changeset API Pattern.

The Changeset API Pattern

Once you have a good foundation, it is time to tackle your application API scenarios regarding data integrity. While a generic changeset like the one above is sufficient to ensure that data matches what is defined in the database in a general sense (covering all inserts and updates), usually not all changes are equal from the application standpoint.

The Problem

For example, let's assume that besides the existing columns in the users table example above, we also have a column called hashed_password for user authentication. Our API has the following endpoints that modify data:

  • Register User;
  • Update User Profile;
  • Change User Password.

Having a generic changeset in our schema will allow all these three operations to happen as desired, however, it opens some data integrity concerns for the two update operations:

  • While updating my first name as part of Update User Profile flow, I also can change my password;
  • While changing my password as part of Change User Password flow, I can update my age.

As long as the fields conform with the generic changeset validations, these unexpected changes will be allowed. You can remedy this behavior by applying filters in your API or your controller (sketched below), but that becomes brittle as your application evolves. Moreover, the Ecto.Schema and Ecto.Changeset modules provide lots of functions for field validation, casting and database constraint checks; not leveraging them would require lots of code duplication, at least in terms of functionality.
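For contrast, here is a hedged sketch of the controller-level filtering described above; the function shape, the do_update/2 helper and the param structure are illustrative assumptions, not code from this post:

def update_profile(conn, %{"user" => user_params}) do
  # Hand-maintained whitelist: every field added to (or renamed in) the
  # schema must also be remembered here, which is what makes this
  # approach brittle as the application evolves.
  safe_params = Map.take(user_params, ["first_name", "last_name", "email", "age"])

  do_update(conn, safe_params)
end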

The Solution

The Changeset API Pattern states that:

For each API operation that modifies data, a specific Ecto Changeset is implemented, making it explicit the desired changes and all validations to be performed.

Instead of a generic changeset, we will implement three changesets, each with a very clear combination of casts, validations and database constraint checks.

Register User Changeset

defmodule Core.User do
  # Code removed

  schema "users" do
    # Code removed
    field(:hashed_password, :string)
    # Code removed
  end

  @register_fields ~w(company_id first_name last_name email age hashed_password)a

  def register_changeset(struct, params) do
    struct
    |> cast(params, @register_fields)
    |> validate_required(@register_fields)
    |> validate_number(:age, greater_than: 0)
    |> unique_constraint(:email)
    |> assoc_constraint(:company)
  end
end

Update User Profile Changeset

defmodule Core.User do
  # Code removed

  @update_profile_fields ~w(first_name last_name email age)a

  def update_profile_changeset(struct, params) do
    struct
    |> cast(params, @update_profile_fields)
    |> validate_required(@update_profile_fields)
    |> validate_number(:age, greater_than: 0)
    |> unique_constraint(:email)
  end
  
  # Code removed
end

Change User Password Changeset

defmodule Core.User do
  # Code removed

  @change_password_fields ~w(hashed_password)a

  def change_password_changeset(struct, params) do
    struct
    |> cast(params, @change_password_fields)
    |> validate_required(@change_password_fields)
  end
end

In your API functions, even if extra data comes in, you are safe, because the intent and expected output of each operation are already defined at the closest point to the data store interaction from the application standpoint: the schema definition module. A sketch follows below.
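As a hedged illustration (the context module, function name and Repo are assumptions, not from the original post), an API function for the password flow could look like this:

defmodule Core.Accounts do
  alias Core.{Repo, User}

  # Only the fields cast in change_password_changeset/2 can change,
  # no matter what else arrives in params.
  def change_user_password(%User{} = user, params) do
    user
    |> User.change_password_changeset(params)
    |> Repo.update()
  end
end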

Caveat

One thing I noticed when I started implementing this pattern is that sometimes I was doing a little more than my initial intent within the changeset functions.

Instead of only performing data type casting, validations and database checks, in a few cases I was also setting field values. For the sake of illustration (though it could be anything along these lines), let's take the example of a user schema with a column verified_at that is null when the user is registered, but later stores the date and time the user was verified.

The changeset for this operation should only allow the verified_at field to be cast with the proper data type; beyond that, however, the current date and time were being set inside the changeset using Ecto.Changeset.put_change/3.

Instead, the responsibility for setting the value of verified_at should be delegated to the API, and that value then validated in the changeset like any other update, as sketched below.
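A minimal sketch of that delegation, assuming an illustrative verify_changeset/2 in the schema module:

# The changeset only casts and validates; it does not decide the value.
def verify_changeset(struct, params) do
  struct
  |> cast(params, [:verified_at])
  |> validate_required([:verified_at])
end

# The API layer supplies the value:
# user
# |> User.verify_changeset(%{verified_at: DateTime.utc_now()})
# |> Repo.update()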

Another common example is encrypting the plain text password (defined as a virtual field) during user registration or password change inside the schema module. The schema should not need to know about password hashing libraries, modules or functions; that responsibility should be delegated to the API functions.

There is nothing wrong with Ecto.Changeset.put_change/3; in some cases it makes sense to use it, for values that can't come through the API for any reason, when you need a mapping between the value sent via the API and your datastore, or when you need to nullify a field.

Advantages

  • pushes data integrity concerns upfront in the development process;
  • protects the schema against unexpected data updates;
  • adds explicitness for allowed data changes and checks to be performed per use-case;
  • complements the commonly present data integrity checks in schema modules with use-cases checks;
  • leverages Ecto.Schema and Ecto.Changeset functions for better data integrity overall;
  • concentrates all data integrity checks in a single place, and the best place: the schema module;
  • simplifies data changes testing per use-case;
  • simplifies input data handling in the API functions or controller actions.

Disadvantages

  • adds extra complexity in the schema modules;
  • can mislead developers into handling more than data integrity in the schema modules, as mentioned in the caveat section above.

When the pattern is not needed

Even though this pattern presents itself to me as a great way to achieve better data integrity, there is one scenario in which I find myself skipping it:

  • usually, the entity (model) is much simpler;
  • the API only provides two types of change (create and a generic update);
  • both create and update require the same data integrity checks.

Conclusion

Data is a very important asset in any software application, and data integrity is a critical component of data quality. The benefits of using this pattern are giving me much more reliability and control over the data handled by my applications. It is also making me think ahead in the development process about how I structure data and how the application interacts with it.

Permalink

Mocking and faking external dependencies in elixir tests

During some recent work on thoughtbot’s company announcement app Constable, I ran into a situation where I was introducing a new service object that made external requests. When unit testing, it is easy enough to use some straightforward mocks to avoid making external requests. However, for tests unrelated to the service object, how can we mock out external requests without littering those tests with explicit mocks?

Unit testing using mocks

To set the stage, let’s take a look at what a hypothetical service object that makes an external request might look like:

defmodule App.Services.WebService do
  def make_request do
    HTTPoison.get!("http://thoughtbot.com/")
  end
end

And if we were to write a unit test for this service object, we would want to mock out the external request (in this case, the call to HTTPoison.get!/1). To do that, we might use a library like Mock:

defmodule App.Services.WebServiceTest do
  import Mock
  alias App.Services.WebService

  describe "#make_request" do
    test ". . ."
      # setup . . .

      # The mock's arity must match the call site: make_request/0
      # calls HTTPoison.get!/1, so we mock a one-argument function.
      get_mock = fn _url ->
        %HTTPoison.Response{
          body: ". . .",
          status_code: 200
        }
      end

      response =
        Mock.with_mock HTTPoison, get!: get_mock do
          WebService.make_request()
        end

      # assertions . . .
    end
  end
end

Where it gets tricky

Mocking is exactly what we want when unit testing the service object, but if we have unrelated unit tests that run code which happens to use our service object, we want to ensure that no external requests are made when running our test suite.

As an example, we might have a module that utilizes our service object:

defmodule SomeModule do
  alias App.Services.WebService

  def do_something do
    # . . .
    response = WebService.make_request()
    # . . .
  end
end

If we were testing this module and our test called SomeModule.do_something/0, we would inadvertently be making an external request. It would be incorrect to mock HTTPoison.get!/1 in this test because that's an implementation detail of our service object. And while we could mock WebService.make_request/0, that will lead to a lot of noise in our tests.

Let’s create a fake

One way we can get around this issue is to create a fake version of our service object which has the same public interface, but returns fake data. That object might look like:

defmodule App.Services.FakeWebService do
  def make_request do
    %HTTPoison.Response{body: ". . .", status_code: 200}
  end
end

We want to utilize this fake by making our application code use WebService unless we are testing, in which case we want to use FakeWebService.

A common way to accomplish this is to have three modules: WebService, WebServiceImplementation, and WebServiceFake. Everyone calls functions on WebService, which then delegates to WebServiceImplementation when not testing, or to WebServiceFake when testing. I don't particularly like this pattern because it requires an extra module and introduces complexity.

A much simpler and more flexible solution is to use a form of dependency injection where we dynamically refer to our service object, which is either the real service or the fake. There is a great Elixir library called pact which accomplishes this by creating a dependency registry, allowing us to define named dependencies but conditionally switch out the actual value they resolve to.

Using pact, we define a dependency registry for our application:

defmodule App.Pact do
  use Pact
  alias App.Services.WebService

  register :web_service, WebService
end

App.Pact.start_link

And then we want to redefine that dependency to be our fake when we run our tests. The following code, either in a test helper or in some setup that occurs before all tests, will accomplish that:

App.Pact.register(:web_service, FakeWebService)

Finally, all calls to WebService.make_request() in our application and tests become App.Pact.get(:web_service).make_request(), as in the sketch below. The one exception is our unit test for WebService itself: there we want to test the actual service object, so we should still call WebService.make_request() explicitly.
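As a hedged illustration, SomeModule from earlier would now resolve the service through the registry (assuming the registration shown above):

defmodule SomeModule do
  def do_something do
    # Resolves to WebService in production and to FakeWebService in tests.
    response = App.Pact.get(:web_service).make_request()
    # . . . use response . . .
  end
end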

Keeping the fake up to date

This approach is good, but there is one problem: if the public interface of our real service object changes, we also have to update the fake. This may be acceptable; after all, it would likely cause a runtime test failure if the public interface of the fake differed from the real service object. But there is an easy way to make the compiler do more work for us.

Using behaviors we can specify a public interface that both the real service object and the fake must conform to. This can give us more confidence that we’re always keeping the two in sync with each other.

Let’s define a module describing the behavior of our service object:

defmodule App.Services.WebServiceProvider do
  @callback make_request() :: HTTPoison.Response.t()
end

And then we can adopt that behavior in our service object and fake:

defmodule App.Services.WebService do
  alias App.Services.WebServiceProvider
  @behaviour WebServiceProvider

  @impl WebServiceProvider
  def make_request do
    HTTPoison.get!("http://thoughtbot.com/")
  end
end

. . .

defmodule App.Services.FakeWebService do
  alias App.Services.WebServiceProvider
  @behaviour WebServiceProvider

  @impl WebServiceProvider
  def make_request do
    %HTTPoison.Response{body: ". . .", status_code: 200}
  end
end

Final thoughts

This approach is great when we want pure unit testing, and our goal is to avoid any external requests. The pact library even allows us to replace dependencies in a block, rather than permanently:

App.Pact.replace :web_service, FakeWebService do
  . . .
end

This can be an invaluable alternative to mocking a dependency for all tests, and may be preferable if we want to be very explicit about what is mocked in each test while still allowing us to easily make use of our fake.

For integration tests or more complex services where we want to test the full service-to-service interaction, we may want to consider building our own mock server instead of replacing the external service with a fake.

Permalink

Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.