Building An Image Upload API With Phoenix


For many API applications, there comes a time when the application needs to save images uploaded to the server either locally or on a CDN. Luckily for us, Elixir and Phoenix provide the tools we need to build a simple image upload API.

The Simple API

Let's define exactly how this API is supposed to work:

  • accept a request containing a base64 encoded image as a field
  • preserve the image extension by reading the image binary
  • upload the image to Amazon's S3
  • provide the URL to the image on S3 in the response
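
To make that contract concrete before we write any code, here is a rough sketch of it as a controller test. This isn't from the original post; the ConnCase helper, the module names, and the fact that the test would hit S3 for real are all assumptions you'd adapt to your own application.

defmodule MyApp.ImageControllerTest do
  # Assumption: MyApp.ConnCase is the standard Phoenix-generated test case module
  use MyApp.ConnCase, async: true

  test "POST /api/images responds with the S3 URL of the uploaded image", %{conn: conn} do
    # A 1x1 transparent PNG, base64 encoded (the same fixture used in "Try It Out" below)
    image_base64 =
      "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="

    conn = post(conn, "/api/images", %{"image" => image_base64})

    assert %{"url" => "https://" <> _rest} = json_response(conn, 201)
  end
end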

Update Your Dependencies

To assist us with uploading images to S3, we will use ExAws to interact with the AWS API, sweet_xml for XML parsing, and UUID to help generate random IDs. Update your mix.exs file to include all three libraries as dependencies.

def deps do
  [
    ...,
    {:ex_aws, "~> 1.1"},
    {:sweet_xml, "~> 0.6.5"},
    {:uuid, "~> 1.1"}
  ]
end

Also, make sure to update your application list if you're using Elixir 1.3 or lower.

def application do
  [
    applications: [
      ...,
      :ex_aws,
      :hackney,
      :poison,
      :sweet_xml,
      :uuid
    ]
  ]
end

Lastly, include your AWS credentials in your config.exs.

config :ex_aws,
  access_key_id: ["ACCESS_KEY_ID", :instance_role],
  secret_access_key: ["SECRET_ACCESS_KEY", :instance_role]

The AssetStore "Context"

Before we create the controller, let's define the application logic in a separate module specifically for handling uploaded assets. For our application, we are only going to support JPEG and PNG files. With a name like AssetStore, we can add support for additional file types in the future while keeping the same context.

defmodule MyApp.AssetStore do
  @moduledoc """
  Responsible for accepting files and uploading them to an asset store.
  """

  import SweetXml
  alias ExAws.S3

  @doc """
  Accepts a base64 encoded image and uploads it to S3.

  ## Examples

      iex> upload_image(...)
      "https://image_bucket.s3.amazonaws.com/dbaaee81609747ba82bea2453cc33b83.png"

  """
  @spec upload_image(String.t) :: s3_url :: String.t
  def upload_image(image_base64) do
    # Decode the image
    {:ok, image_binary} = Base.decode64(image_base64)

    # Generate a unique filename
    filename =
      image_binary
      |> image_extension()
      |> unique_filename()

    # Upload to S3
    {:ok, response} = 
      S3.put_object("image_bucket", filename, image_binary)
      |> ExAws.request()

    # Return the URL to the file on S3
    response.body
    |> SweetXml.xpath(~x"//Location/text()")
    |> to_string()
  end

  # Generates a unique filename with a given extension
  defp unique_filename(extension) do
    UUID.uuid4(:hex) <> extension
  end

  # Helper functions to read the binary to determine the image extension
  defp image_extension(<<0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A, _::binary>>), do: ".png"
  defp image_extension(<<0xff, 0xD8, _::binary>>), do: ".jpg"
end
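
Before wiring up a controller, we can sanity-check the context from iex. The local file path and the returned URL below are purely illustrative, and this assumes the bucket exists and the AWS credentials configured above are valid:

# Illustrative iex session; the file path and resulting URL are made up
png_base64 =
  "priv/static/images/example.png"
  |> File.read!()
  |> Base.encode64()

MyApp.AssetStore.upload_image(png_base64)
#=> "https://image_bucket.s3.amazonaws.com/0b4aa3e0a9f847309a6ff0efebcf3bc4.png"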

Designing the Controller

Create a new controller responsible for images. It simply needs to call the AssetStore module we just built.

defmodule MyApp.ImageController do
  use MyApp.Web, :controller

  def create(conn, %{"image" => image_base64}) do
    s3_url = MyApp.AssetStore.upload_image(image_base64)

    conn
    |> put_status(201)
    |> json(%{"url" => s3_url})
  end
end

Now let's go update our router to include the new route in our API.

scope "/api", MyApp do
  ...

  # Our new images route
  resources "/images", ImageController, only: [:create]
end

Our application is now ready to accept images!

Try It Out

We can easily try out our new API from the terminal with a quick cURL request, uploading a 1x1 transparent PNG file.

curl -X "POST" "http://localhost:4000/api/images" \
     -H "Content-Type: application/json" \
     -d $'{
  "image": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
}'

{"url": "https://image_bucket.s3.amazonaws.com/dbaaee81609747ba82bea2453cc33b83.png"}

Wrap Up

As we can see, Elixir and Phoenix provide the tools to build an API that accepts base64-encoded image uploads with very little code. Be sure to read the docs of the dependencies we leveraged.


On “Free Speech” and the internet

We’ve recently seen Cloudflare decide to terminate service for “The Daily Stormer”, because the Stormer is seen as a Neo-Nazi site. There is an ongoing internet debate over whether this was a valid or invalid decision. Both sides are right. Let me explain:

The solution I would propose is the same as one by Dan Geer (see [0]). Any internet company has a choice between two options with respect to their net neutrality on the internet:

  • Either, you are a carrier and enjoy common carrier protections. You move data around for other people. As a common carrier, you cannot be held liable for what is carried in “your network” or over “your service”. But, as a common carrier, you are not allowed to inspect the data you are carrying.
  • Or, you are allowed to do (deep) inspection of the data you carry, charge differently for different types of data, reject some data over other data, or otherwise interfere with the normal operation. But you can thus be held liable for the data you are eventually carrying, since there is no way you can claim that “you did not know about the contents.”

The key observation is: you can’t get both at the same time.

Cloudflare must decide if they are a carrier in the above scheme, or if they want to be held liable for the traffic they carry. If they are not a carrier, it would open them up to DMCA claims, for instance, since they aren’t allowed to simply protect themselves under the carrier label. If they are a carrier, they also have ample protection against people who disagree with the Daily Stormer: “we are just a carrier; your complaints should go to someone else if you think the Stormer’s operation is illegal.” In short, Cloudflare is protected against political pressure from individuals, lobby groups and so on.

Likewise, an ISP who wants to inspect TCP/IP streams of their customers, build profiles, and sell the profiles to a 3rd party can be held liable for the eventual illegal data transfer of their customers. They inspected the data, so they should know. The fact that it was a machine doing the job is a detail[1].

Google, Facebook, et al. take data produced by society and abuse it for machine learning purposes. Under this choice of law, they are now liable, since they inspected the data for another purpose. Of course, they can pay people for access to the data if they want. And they are free to set any price they like on the data. If Google thinks it is worth $3 per year to them, I doubt many people would accept. But make that $3000 or more, and I think a lot of people would do it. The current problem here is the lack of transparency in the market.

The current situation, in which you can pivot between being a carrier or not, is neither efficient nor very productive. In particular, it doesn’t give individuals basic rights, irrespective of their grouping (be it political, ideological, racial, etc.). And it doesn’t give companies basic rights either, so they end up giving in to ideological pressure, one way or the other. There is a fine line between being a (public) utility and a company that can make choices in this regard.

You might argue, from the heart or mind, that a site like “Daily Stormer” shouldn’t be allowed to have any kind of representation at all, but I think this goes against some first principle of free speech. It doesn’t violate any laws, but one has to acknowledge that laws are lowest common denominators of what can work in practice. They are not morally or ethically superior by definition, because they have to be clear-cut arbiters of what is right and wrong. You can easily have examples where a tyranny of the majority, or a specific political/ideological leaning, can restrict speech for some out-groups. It happens in the small, for instance by silencing neuro-atypical human beings. It happens in the large as well, for instance by claiming there are certain areas which we as human beings shouldn’t even research (Examples: Global Warming, Race/IQ, Gender Differences, …).

But, however rarely, the most outrageous speech is sometimes the truth we don’t want to hear. I think we can recognize that by giving our political or ideological opponents the liberty of expression, we can hope to gain the courtesy of getting it back. At least we have the moral upper hand in demanding that they do so, if you grant me the naïvety of such a proposal at first. On the other hand, cancellation and termination does nothing but breed animosity. And animosity eventually becomes violence at full-blown scale.

The underlying problem, of course, is that law has not caught up with the internet yet. Politicians still believe they can treat the internet as any other entity, and since most of them don’t really get what it is all about, they create some rather indecent proposals. They often lean entirely to one side, that of government convenience, without giving the citizens an equal amount of rights in the process. Such laws are likely to be opposed.

We need internet laws, and tech laws. Dan Geer’s work is an excellent place from which to spawn the discussion.

[0] Security as Realpolitik: http://geer.tinho.net/geer.blackhat.6viii14.txt

[1] We should also have a law which forces an ISP to convey how much they earn on selling the data, so the cost of your internet becomes transparent.


The top 10 Elixir talks of 2017 so far

Want more from the world of Elixir? We're at Elixir.LDN (http://bit.ly/2v1Aazb) this week, and at ElixirConf 2017 (http://bit.ly/2i5RQbF) in September.

2017 has been a great year for Elixir! It turned 5 years old this January, and the language has just gone from strength to strength.

It's also been a fantastic year for the sharing of knowledge within the community. Here's our pick of the most popular Elixir talks of 2017 so far.

GenStage and Flow - José Valim

Event: Lambda Days 2017
Speaker: José Valim, Elixir creator
Level: Intermediate
Video: https://www.youtube.com/embed/XPlXNUXmcgE

José explores the rationale and design decisions behind GenStage and Flow, two abstractions that have been researched and implemented in the Elixir programming language with a focus on back-pressure, concurrency and data processing.

Lambda Days 2018 tickets are now available (http://bit.ly/2vEfiSd).

Transforming Programming - Dave Thomas

Event: Erlang and Elixir Factory San Francisco 2017
Speaker: Dave Thomas, programmer turned publisher (but mostly programmer)
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/A76hM3MpEKo

When it comes to the BEAM, the last thing the community wants to do is break something good. At the same time, it's also a bad idea to stagnate.

It is on this basis that Dave Thomas explores what happens if he takes the BEAM and starts using it in different ways. In this talk, Dave shares his thought-provoking way of understanding and teaching programming as state transformation as he experiments with programming by transformation. Does Dave succeed in freeing himself, and the community, from the "tyranny of the program counter"? You'll have to watch to find out!

Leveling Up Your Phoenix Projects with OTP - Nico Mihalich

Event: Lonestar ElixirConf 2017
Speaker: Nico Mihalich, DockYard
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/QRdZDcYq9-Y

Want to build a fully functional Elixir project, integrated into a Phoenix application? This talk is for you!

With Elixir and Phoenix, the toolkit for building web applications has expanded dramatically. Beyond Phoenix's routers and controllers is a whole new world of features and ways to build reliable systems. In this talk, Nico demonstrates how.

Over the course of the talk Nico takes a practical approach to building a small application in Elixir, walking through all the steps and describing which language features he takes advantage of and why. Along the way he explores GenServer, OTP, backpressure management, synchronous vs asynchronous calls, a testing strategy, and integration into a Phoenix application.

Taking Elixir to the Metal with Rust - Sonny Scroggin

Event: NDC London 2017
Speaker: Sonny Scroggin, Phoenix core team member
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/lSLTwWqTbKQ

Elixir is great; we all love Elixir. However, in this talk Sonny is very upfront about Elixir's speed: it turns out Elixir isn't the fastest kid on the block. And while raw CPU speed matters little for most applications, there are a couple of reasons you might want to reach for tools that give you access to native power.

In this talk Sonny discusses Native Implemented Functions (NIFs), Erlang's foreign function interface (FFI).

NIFs are normally implemented in C and are considered dangerous. Sonny explores writing safer NIFs in Rust, a systems programming language developed by Mozilla that focuses on memory safety. He also touches on the pitfalls of writing NIFs and how Rust can make the process easier and safer.

ElixirConf EU 2017 Keynote - José Valim

Event: ElixirConf EU 2017
Speaker: José Valim, creator of Elixir
Level: Beginner, Intermediate, Advanced
Video: https://www.youtube.com/embed/IZvpKhA6t8A

José Valim is the creator of the Elixir programming language and the Director of R&D at Plataformatec, a consultancy firm based in Brazil. He is the author of Adopting Elixir and Programming Phoenix as well as an active member of the open source community.

In this talk, José walks us through his journey with Elixir so far, and shares his plans for the language in the coming year.

Elixir and Money - Tomasz Kowal

Event: ElixirConf EU 2017
Speaker: Tomasz Kowal, ClubCollect
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/TZPG8b-Novw

Dealing with money should be easy, because crunching numbers is the most basic thing every computer can do. On the other hand, the cost of a mistake may be quite high.

In this talk, Tomasz outlines the properties a financial system needs in terms of the CAP theorem and how Elixir fits into a real-life problem domain. He also covers handling rounding errors, designing APIs that gracefully handle network and hardware failures, and the use of the "let it crash" approach in the design.

Phoenix 1.3 - Chris McCord

Event: Lonestar ElixirConf 2017
Speaker: Chris McCord, Phoenix creator
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/tMO28ar0lW8

Chris, Phoenix's creator, runs through the updates and developments in Phoenix 1.3 in his Lonestar ElixirConf keynote, which makes this talk a must-see for anyone working or developing with Phoenix.

Learn about the design decisions behind the new generators. Chris also explains the rationale behind the new approach to structuring applications in Phoenix.

PS: watch out for Chris' shoutout on being memed! The man has well and truly made it!

Phoenix: an Intro to Elixir's Web Framework - Sonny Scroggin

Event: NDC London 2017
Speaker: Sonny Scroggin, Phoenix core team member
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/F-7MX_Az6_4

Phoenix, a web framework written in Elixir, provides the building blocks for creating fast, efficient, scalable web services. It supports the standard request/response model, but also provides support for WebSockets out of the box.

In this talk, Sonny illustrates just how easy it is to get up and running with Phoenix by building an application, live!

Can Elixir Bring Down Phoenix? - Ben Marx

Event: ElixirDaze 2017
Speaker: Ben Marx, Bleacher Report
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/49Gw8DT5pEo

The topic of Phoenix and Elixir continues.

In this talk from ElixirDaze 2017, Ben Marx pits Elixir against Phoenix. Against the backdrop of his work with Elixir and Phoenix in production at Bleacher Report (read our case study), Ben explains how he uses Elixir and Phoenix to serve a massive number of concurrent users.

Elixir vs. Ruby Fight - wroc_love.rb panel

Event: wroc_love.rb 2017
Speakers: Hubert Łępicki, Michał Muskała, Andrzej Krzywda, Robert Pankowecki, Maciej Mensfeld
Level: Beginner, Intermediate
Video: https://www.youtube.com/embed/O5dzsG5grK4

Bonus! This panel from the wroc_love.rb 2017 conference poses a variety of Elixir, Erlang, and Ruby questions to its panelists. The insightful conversation highlights the different ways people come to Elixir, and their thoughts on the language in the context of Erlang and Ruby.

Want more from the world of Elixir? We're at Elixir.LDN (http://bit.ly/2v1Aazb) this week, and at ElixirConf 2017 (http://bit.ly/2i5RQbF) in September.

Learn more about our Elixir consultancy work (http://bit.ly/2w7OZEN), and keep up with the latest Erlang and Elixir news via our newsletter (http://www2.erlang-solutions.com/emailpreference).


Alphabet Project, Part 9

  1. Alphabet Project, Part 1
  2. Alphabet Project, Part 2
  3. Alphabet Project, Part 3
  4. Alphabet Project, Part 4
  5. Alphabet Project, Part 5
  6. Alphabet Project, Part 6
  7. Alphabet Project, Part 7
  8. Alphabet Project, Part 8

Refactoring LilyPond output - Part Deux

I ended the last post with a list of nice-to-have improvements. To refresh both our memories:

  • attach the consonant phonemes to their parts
  • clean up repeated dynamics and add some hairpins as dynamics change
  • have dynamic and phoneme markup attached to the first note in a measure, not the first event, which might be a rest
  • work on being able to extract clean, legible parts for performers
  • try to clean up repeated 8th and 16th rests, though this is a tricky task while retaining some semblance of traditional score legibility (i.e. notating rests on the downbeats of natural measure subdivisions, rather than willy-nilly)

Today, I’m going to try to get through this list and end with a pretty, legible score with extracted parts.

Attach consonant phonemes

I’ve started here because this is by far the easiest task out of the lot. Currently our code looks like this:

def consonant_phoneme_modulation_points(letter) do
  PolyrhythmGenerator.ordered_coordinates(letter)
  |> Enum.sort_by(fn {_, y} -> y end)
  |> Enum.with_index
  |> Enum.filter(fn { {x, _}, i} -> rem(i, 20) == 0 || x == 0 end)
  |> Enum.zip(Stream.cycle(["ei", "i:", "ai", "ou", "u:"]))
  |> Enum.map(fn { { {x, _}, _}, vowel} -> {x, vowel} end)
  |> Enum.sort
end

The important lines for our purposes today are

# zip the selected measure indices with their respective phonemes
|> Enum.zip(Stream.cycle(["ei", "i:", "ai", "ou", "u:"]))
# return tuples of {index, phoneme}
|> Enum.map(fn { { {x, _}, _}, vowel} -> {x, vowel} end)

All we need to do is update the return tuple in the last line there to prepend the consonant phoneme to the vowel.

In the International Phonetic Alphabet transcription of American English, most English consonants are represented by their own character, so we only need to handle a few special cases when mapping between them. Let's add a function to do that:

def consonant_phoneme_for(letter) do
  Map.get(%{
    "c" => "k", "j" => "d͡ʒ", "q" => "kʰ", "r" => "ɹ", "x" => "ks", "y" => "j"
  }, letter, letter)
end

If the letter we pass in to consonant_phoneme_for/1 is a key in the map, it returns the value for that key; otherwise, the letter character itself serves as the phoneme, so we just return it. Now we can easily plug this into consonant_phoneme_modulation_points/1 like so

def consonant_phoneme_modulation_points(letter) do
  # Find the appropriate consonant character
  with consonant_phoneme <- consonant_phoneme_for(letter) do
    PolyrhythmGenerator.ordered_coordinates(letter)
    |> Enum.sort_by(fn {_, y} -> y end)
    |> Enum.with_index
    |> Enum.filter(fn { {x, _}, i} -> rem(i, 20) == 0 || x == 0 end)
    |> Enum.zip(Stream.cycle(["ei", "i:", "ai", "ou", "u:"]))
    # And prepend it to the vowel before returning
    |> Enum.map(fn { { {x, _}, _}, vowel} -> {x, consonant_phoneme <> vowel} end)
    |> Enum.sort
  end
end

alphabet-part-9/page1_consonants.png

And there we go! The vowel parts remain the same, but the consonant parts also print their consonant phoneme.

Onwards!

Clean up repeated dynamics

Let’s look at the last page of the score:

alphabet-part-8/page78_with_tuplet_text.png

Every measure has a dynamic attached to it, even when that dynamic is the same as the measure before. Traditionally, we would only display the dynamic when it changes, and doing so here will make the score look cleaner.

All we need to do is compare each measure with the measure before it, and, if it has the same dynamic, remove it so it won’t print.

My first attempt at this was a very Elixir-esque recurse-with-an-accumulator approach:

# start off with the first measure in the accumulator
# and its dynamic as the `current_dynamic`
def clean_dynamics([m|ms]), do: clean_dynamics(ms, m.dynamic, [m])

# if there are 0 or 1 measures left, we don't need to worry about
# the next measure, so we just reverse the accumulator and return
def clean_dynamics([], _, acc), do: Enum.reverse(acc)
def clean_dynamics([m], _, acc), do: Enum.reverse([m|acc])
# otherwise
def clean_dynamics([m|ms], current_dynamic, acc) do
  # if the next measure has the same dynamic
  # as the `current_dynamic`, update the measure to
  # delete the dynamic
  case m.dynamic == current_dynamic do
    true ->
      new_m = %Measure{ m | dynamic: nil}
      clean_dynamics(ms, current_dynamic, [new_m|acc])
    # if the dynamic is different, leave the dynamic
    # on the measure and set the new `current_dynamic`
    false ->
      clean_dynamics(ms, m.dynamic, [m|acc])
  end
end

But then I read a blog post[1] that mentioned how, while this is indeed a very Erlang/Elixir way of doing things, the Enum module often provides enough functionality to accomplish the same work in a single Enum.map pass, so I thought I'd give it a try:

def clean_dynamics(measures = [m|_]) do
  # create 2-element, overlapping chunks for each measure + the following measure
  # passing :discard ensures that any incomplete chunk
  # (for example, the last measure by itself)
  # is ignored. This saves us having to explicitly remove it later.
  new_measures = Enum.chunk_every(measures, 2, 1, :discard)
  |> Enum.map(fn [m1, m2] ->
       # compare the dynamics for the two measures
       case m1.dynamic == m2.dynamic do
         # if they're the same, delete the second measure's dynamic
         # and return it
         true -> %Measure{ m2 | dynamic: nil }
         # otherwise, just return the second measure unchanged
         false -> m2
       end
  end)
  # since the map above returns the second element of each chunk,
  # we've lost the very first measure, so we push that
  # back on the front of the measures and return them all
  [m|new_measures]
end

As far as code size goes, the second solution is definitely shorter, and does reduce a bit of the mental overhead required to parse it, since it’s only one function. Admittedly, there’s something fun about tail recursion, but since I’d like to be able to read this code again in the future, and since they both return the same results, let’s stick with the second, shorter solution. Thanks for the tip, forgotten blog author!

alphabet-part-9/page71_dynamics.png

No repeated dynamics! Well, there are still a few, but they only reappear after an entire measure of rest (see the bottom line). This is acceptable, since after rests it can be helpful to re-state the dynamic.

Add hairpins

To give the piece a bit of movement, and, honestly, make the score a bit more interesting to read, I want to add some crescendos and decrescendos when the dynamics shift. My plan is to add a full measure dynamic to the measure before a dynamic change.

Because we need to know the last printed dynamic in the piece, we need to keep an accumulator of sorts for the dynamic, to keep track of the last non-nil value. To do this, we need to return to the tail recursion approach we tried and rejected above:

# call to a private method, also passing along the current dynamic and
# an empty measure accumulator
def add_hairpins(measures = [m|_]), do: _add_hairpins(measures, m.dynamic, [])

# if the list has 0 or 1 elements remaining (we will never draw a hairpin on
# the final measure), we can add anything remaining to the front
# of the accumulator, reverse it, and return it
defp _add_hairpins(l, _, acc) when length(l) <= 1, do: Enum.reverse(l ++ acc)
# otherwise, if there are at least 2 elements remaining
defp _add_hairpins([m,m2|ms], current_dynamic, acc) do
  # if the 2nd measure has no dynamic, we keep the same current dynamic
  # otherwise we set that measure's dynamic to pass for the next iteration
  next_dynamic = case m2.dynamic do
    nil -> current_dynamic
    d -> d
  end
  # a shortcut value to check whether either measure is entirely composed of rests
  # we don't need to print a hairpin over or pointing towards rests
  either_measure_all_rests = Measure.all_rests?(m) || Measure.all_rests?(m2)
  new_m = cond do
    # if the second measure's dynamic is nil, the dynamic doesn't change
    # so we don't need to print a hairpin
    m2.dynamic == nil -> m
    # if the next measure's dynamic is less than the current dynamic
    # we need to print a decrescendo over the first measure
    dynamic_index(m2.dynamic) < dynamic_index(current_dynamic) && not either_measure_all_rests ->
      %Measure{m | hairpin: ">" }
    # if the next measure's dynamic is greater than the current dynamic
    # we need to print a crescendo over the first measure
    dynamic_index(m2.dynamic) > dynamic_index(current_dynamic) && not either_measure_all_rests ->
      %Measure{m | hairpin: "<" }
    # any other case doesn't require any change
    true -> m
  end
  # push the new measure onto the front of the accumulator, and recurse
  _add_hairpins([m2|ms], next_dynamic, [new_m|acc])
end

Then we also need to make a couple small changes to our Measure struct to print out these hairpins:

defmodule Measure do
  ...
  def events_to_lily(measure = %__MODULE__{}) do
    with  [h|t] <- reduce(measure).events |> add_beaming(),
-         h <- h <> dynamic_markup(measure) <> phoneme_markup(measure)
+         h <- h <> dynamic_markup(measure) <> hairpin_markup(measure) <> phoneme_markup(measure)
    do
      [h|t] |> Enum.join(" ")
    end
  end

+ def hairpin_markup(%__MODULE__{hairpin: nil}), do: ""
+ def hairpin_markup(%__MODULE__{hairpin: hairpin}), do: "\\#{hairpin}"
end

And…

alphabet-part-9/page71_hairpins.png

Great! But there’s always one more thing, isn’t there?

Yes there is, and in this case it’s that we’re always attaching dynamics and phonemes to the first event of a measure, whether or not it’s a rest:

alphabet-part-9/rest_attachments.png

Rests can have neither dynamics nor pronunciation[2], and so it makes little sense to attach such markup to them. Right now we're attaching markup with this code

def events_to_lily(measure = %__MODULE__{}) do
  with  [h|t] <- reduce(measure).events |> add_beaming(),
        h <- h <> dynamic_markup(measure) <> hairpin_markup(measure) <> phoneme_markup(measure)
  do
    [h|t] |> Enum.join(" ")
  end
end

where we deconstruct the list of events and write all the markup to the head of the list. Instead, we want to find the first event that isn’t a rest and attach to that. Let’s see what that looks like

def events_to_lily(measure = %__MODULE__{}) do
  with events <- reduce(measure).events,
       # find the first note in the measure
       first_note <- Enum.find(events, fn e -> not Regex.match?(~r/^r/, e) end),
       # find the index of the first note
       first_note_index = Enum.find_index(events, &(&1 == first_note))
  do
    # add markup to that note
    marked_up_note = first_note <> dynamic_markup(measure) <> hairpin_markup(measure) <> phoneme_markup(measure)
    # replace the un-marked-up note with its marked up version
    new_events = List.replace_at(events, first_note_index, marked_up_note) |> add_beaming()
    Enum.join(new_events, " ")
  end
end

alphabet-part-9/note_attachments.png

Hey, much better! Except, where did our beaming go? Let’s look at the code that generates the beaming

def add_beaming(events) do
  case Enum.all?(events, &Regex.match?(~r/(8|16)\.?$/, &1)) do
    true -> events |> List.insert_at(1, "[") |> List.insert_at(-1, "]")
    false -> events
  end
end

There’s the issue in the case statement. We’re making sure all the events in the measure can be beamed (i.e. are 8th or 16th notes), by checking that each measure event ends with an 8th/16th note (dotted or otherwise). But now that we’re adding the markup before we check for beaming, the markup keeps the event string from ending with the duration notation, thus causing our case statement to always return false, and thus, no beaming.

Fortunately, this is an easy fix. We can just change the Regex match to check for the presence of the correct duration notation anywhere in the event string, rather than only at the end

def add_beaming(events) do
- case Enum.all?(events, &Regex.match?(~r/(8|16)\.?$/, &1)) do
+ case Enum.all?(events, &Regex.match?(~r/(8|16)\.?/, &1)) do
    true -> events |> List.insert_at(1, "[") |> List.insert_at(-1, "]")
    false -> events
  end
end

alphabet-part-9/with_bindings.png

And there we go!

That’s 3/5 of the remaining issues I listed at the beginning of the article. Extracting parts will be the topic of the next post, and cleaning up repeated 8th and 16th rests is a task I’m likely to pass on for now, the reasoning behind which will be a brief subtopic in the next post. So thanks for reading, and stay tuned!

The code, as it exists by the end of this post, can be found on Github here.



Thanks for reading! If you liked this post, and want to know when the next one is coming out, follow me on Twitter (link below)!



  1. I’m pretty sure it was by José Valim or Chris McCord, either of whom definitely know what they’re talking about as far as Elixir is concerned, but I can’t track it down again. 

  2. Though I’m sure there are composers out there who have determined they do in the context of a piece. This is not such a piece. 


5 Elixir tricks you should know

alias __MODULE__

This one looks mysterious at first, but once we break it down, it's very straightforward.

alias allows you to define aliases for the module name, for example:

alias Foo.Bar will set up an alias for module Foo.Bar, and you can reference that module with just Bar.

__MODULE__ is a compilation environment macro that expands to the current module name as an atom.
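
For example (the module and function here are just for illustration):

defmodule Foo.Bar do
  # __MODULE__ expands at compile time to the atom Foo.Bar
  def whoami, do: __MODULE__
end

Foo.Bar.whoami()
#=> Foo.Bar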

So alias __MODULE__ simply defines an alias for the current Elixir module. This is very useful in combination with defstruct, which we will talk about next.

In the following example, we pass the API.User struct around to run some checks on our data. Instead of writing the full module name, we set up an alias User for it and pass that around. It's pretty concise and easy to read.

defmodule API.User do
  alias __MODULE__

  defstruct name: nil, age: 0

  def old?(%User{name: name, age: age} = user) do
    ...
  end
end

In case the module name changes later, you can also alias it under a different name:

alias __MODULE__, as: SomeOtherName

defstruct with @enforce_keys

Whenever you want to model your data with maps, you should also consider structs, because a struct is a tagged map that offers compile-time checks on its keys and allows run-time checks on the struct's type. For example, you can't create a struct with a field that is not defined. In the following example you can also see how we apply the first trick we just learned.

defmodule Fun.Game do
  alias __MODULE__
  defstruct(
    time: nil,
    status: :init
  )

  def new() do
    %Game{step: 1}
  end
end

iex> IO.inspect Fun.Game.new()
** (KeyError) key :step not found in: %{__struct__: Fun.Game, status: :init, time: nil}

However, sometimes you want to ensure that some fields are present whenever you create a new struct. Fortunately, Elixir provides the @enforce_keys module attribute for that:

defmodule Fun.Game do
  @enforce_keys [:status]

  alias __MODULE__
  defstruct(
    time: nil,
    status: :init
  )

  def new() do
    %Game{}
  end
end

iex> Fun.Game.new()
** (ArgumentError) the following keys must also be given when building struct Fun.Game: [:status]

Based on the result, you can see that in this case we can't rely on the default value of status; we need to specify its value when we create a new Game:

def new(status) do
  %Game{status: status}
end

iex> Fun.Game.new(:won)
%Fun.Game{status: :won, time: nil}

v() function in iex

Whenever I write a GenServer module, I usually want to start the server and check the result in iex. One thing that really bothers me is that I almost always forget to pattern match the process pid, like this:

iex(1)> Metex.Worker.start_link()
{:ok, #PID<0.472.0>}

Then I need to type that command again with pattern matching:

{:ok, pid} = Metex.Worker.start_link()

Tired of doing this over and over again, I found that you can use v() to return the result of the last command:

iex(1)> Metex.Worker.start_link()
{:ok, #PID<0.472.0>}
iex(2)> {:ok, pid} = v()
{:ok, #PID<0.472.0>}
iex(3)>  pid
#PID<0.472.0>

This trick saves me a couple of seconds every time I use it; I hope you will find it helpful too.

cancel bad command in iex

Have you ever had this kind of moment when you use iex:

iex(1)> a = 1 + 1'
...(2)>
...(2)>
...(2)>
BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
       (v)ersion (k)ill (D)b-tables (d)istribution

Normally, I would hit ctrl + c twice to exit iex and start a new session. However, sometimes you've already typed in a bunch of commands, and you definitely want to keep the session. Here is what you can do: type #iex:break

iex(2)> a = 1 + 1
iex(2)> b = 1 + 1'
...(2)>
...(2)> #iex:break
** (TokenMissingError) iex:1: incomplete expression

iex(2)> a
2

From the code block above, you can see that we still have the session after canceling a bad command.

bind value to an optional variable

I'm sure most people know that you can bind a value to an optional (underscore-prefixed) variable like this:

_dont_care = 1

The reason I bring this up is that we can apply this trick to our function heads to make them more readable:

defp accept_move(game, _guess, _already_used = true) do
  Map.put(game, :state, :already_used)
end
defp accept_move(game, guess, _not_used) do
  Map.put(game, :used, MapSet.put(game.used, guess))
  |> score_guess(Enum.member?(game.letters, guess))
end

Thanks for reading this post, and please share your Elixir tricks with the community.


The guts of a property testing library

Property-based testing is a common tool to improve test coverage by checking properties of a piece of software over many values drawn at random from a large space of valid values. This methodology was first introduced in the paper QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs, which describes the basic idea and shows a possible implementation in Haskell. Since then, many tools to aid in property-based testing have appeared for many programming languages: as of the time of writing, there are libraries for Haskell, Erlang, Clojure, Python, Scala, and many others. A few days ago I released the first version of StreamData, a property testing (and data generation) library for Elixir (and a candidate for inclusion in Elixir itself in the future). This post is not an introduction to property-based testing nor a tutorial on how to use StreamData: what I want to do is dig into the mechanics of how StreamData works, its design, and how it compares to some of the other property-based testing libraries mentioned above.
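
To give a flavour of what such a library is for, here is a minimal property sketched with StreamData's ExUnitProperties; the generator and macro names follow recent releases of the library and may differ from the version described in the post:

defmodule ReverseTest do
  use ExUnit.Case
  use ExUnitProperties

  property "reversing a list twice returns the original list" do
    # list_of(integer()) generates a fresh random list of integers for each run
    check all list <- list_of(integer()) do
      assert list |> Enum.reverse() |> Enum.reverse() == list
    end
  end
end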


Alphabet Project, Part 8

  1. Alphabet Project, Part 1
  2. Alphabet Project, Part 2
  3. Alphabet Project, Part 3
  4. Alphabet Project, Part 4
  5. Alphabet Project, Part 5
  6. Alphabet Project, Part 6
  7. Alphabet Project, Part 7

Refactoring LilyPond output - formatting measures

In the last post I refactored code around representing measures as structs, rather than anonymous tuples. That version of the code is on Github here.

Beaming

The goal of that refactor was to clean up the code base, not to make any changes to the resulting output, but I did end up making one small change. In Measure.events_to_lily/1 I added a call to Measure.add_beaming/1:

defmodule Measure do
  def events_to_lily(measure = %__MODULE__{events: [h|t]}) do
    with h <- h <> dynamic_markup(measure) <> phoneme_markup(measure) do
      [h|t] |> add_beaming() |> Enum.join(" ")
    end
  end

  def add_beaming(events) do
    events |> List.insert_at(1, "[") |> List.insert_at(-1, "]")
  end
end

All this does is insert [ as the second element of the events list, and ] as the last element. When this is compiled by LilyPond, it ensures that all the notes in a measure will be beamed together. To illustrate, this is the difference between

alphabet-part-8/unbeamed.png

and alphabet-part-8/beamed.png

which I think looks rather nicer.

Full measure rests

However, on a less aesthetic note, this is also the difference between

alphabet-part-8/unbeamed_rests.png

and alphabet-part-8/beamed_rests.png

Clearly the beaming doesn’t work as well here. But more importantly, in both cases we should be able to display a full measure rest, instead of a tuplet made up of individual rests.

This is a small change in the Elixir code. We can add a function to check whether every event in a measure is a rest (all_rests?/1), and then case on the value of that function call to either print the events as usual, or use LilyPond’s full measure rest syntax to print out the appropriate rest symbol for the measure.

defmodule Measure do
  def all_rests?(%__MODULE__{events: events}) do
    Enum.all?(events, &(&1 == "r8"))
  end

  def to_lily(measure = %__MODULE__{time_signature: {n, d}}) do
    case all_rests?(measure) do
      false -> "  \\time #{n}/#{d} #{events_to_lily(measure)}"
      true  -> "  \\time #{n}/#{d} R8 * #{n}"
    end
  end
  def to_lily(measure = %__MODULE__{tuplet: {n, d}}) do
    case all_rests?(measure) do
      false -> "  \\tuplet #{n}/#{d} { #{events_to_lily(measure)} }"
      true  -> "  R8 * #{d}"
    end
  end
end

This gives us the much more preferable

alphabet-part-8/full_measure_rest.png

Printing durations

So far every printed note has been an 8th note. While this works for getting the basic layout of the piece set, it’s hardly ideal for a final printed score. Let’s look at the opening measure

alphabet-part-8/m1_before.png

Already we see a few odd ratios: 60:30, 3:30, 7:30, 10:30, and so on. In many cases these are reducible, but beyond that, the tuplet should not be using 8th notes.

There are two issues it would behoove us to solve here:

  1. find measures in which the tuplet divides evenly into the measure and “detupletify”
  2. for measures that require tuplets but whose ratios should not be notated with 8th notes, pick a more suitable duration

Issue 2 is slightly easier to work out, and helps us on our way to solving issue 1, so we’ll start there:

Correcting notated tuplet durations

When notating tuplets in music, and even more commonly now as rhythms in contemporary music become more complex, there is some flexibility. For example, dividing a quarter note beat into quintuplets: should they be notated as 5 sped-up 16th notes, or 5 slowed-down 32nd notes? In almost all cases, composers and performers would prefer the 16th notes, but the possibility of alternate notations does remain.

To eliminate some of the subjectivity, or, rather, to render my general subjectivity into code, I’m going to say that, for each tuplet, we will

  • find the exact, probably fractional, number of 8th notes that would be required to fit the tuplet, let’s call this value x
  • round x based on standard float rounding (0.5 and higher rounds up, everything else rounds down)
  • this gives us the base count of 8th notes to use per tuplet event
  • find the closest untied, notateable duration given that 8th note count
  • renotate the measure using this duration

Addendum: if x would round to 0, the ratio is greater than 2:1, in which case we should use 16th notes against 8th notes

An example:

Given the ratio 7:30, we want to find the untied musical duration that best fits into 30 seven times.

30 / 7 = 4.285... which rounds to 4 8th notes, or a half note. So the measure would be rewritten using 7 half notes instead of 7 8th notes.

In code, this looks something like this:

defmodule Measure do
  defstruct [
    :time_signature, :tuplet, :events,
    :dynamic, :phoneme, :written_duration
  ]

  ...

  def set_proper_duration(measure = %Measure{time_signature: {_, _}, tuplet: nil}) do
    %Measure{ measure | written_duration: "8" }
  end
  def set_proper_duration(measure = %Measure{tuplet: {n, d}}) do
    with x <- round(d / n) do
      duration = case x do
        0 -> "16"
        1 -> "8"
        2 -> "4"
        3 -> "4."
        n when n in [4, 5] -> "2"
        6 -> "2."
        7 -> "2.."
        _ -> "1"
      end
      %Measure{ measure | written_duration: duration }
    end
  end
end

Here we find the ratio for the tuplet, and based on our decisions above, pick the most appropriate untied duration. If the time signature is set instead of the tuplet – indicating the pulse part – we return “8”. I’ve also added the :written_duration attribute to the Measure struct, and we store this duration in that attribute to make it more easily accessible.

Now we need to make sure this new duration actually gets used when the measure is rendered, by replacing the duration in each of the measure's events:

def replace_durations(measure = %__MODULE__{events: events, tuplet: nil}), do: measure
def replace_durations(measure = %__MODULE__{tuplet: {_, _}}) do
  with measure <- set_proper_duration(measure) do
    new_events = Enum.map(measure.events, fn e ->
      Regex.replace(~r/\d+$/, e, measure.written_duration)
    end)
    %Measure{ measure | events: new_events }
  end
end

Simple enough! Let’s see what this looks like:

alphabet-part-8/page1_problematic.png

Huh. Well, I won’t pretend I don’t think that looks pretty cool. But, it’s not really what we’re going for, so let’s figure out what went wrong.

Turns out it’s not too difficult to suss out the issue. Let’s take a closer look at the 8th and 9th staves:

alphabet-part-8/h+i_problematic.png

If we look at the numbers above the staves in the 2nd measure, we see 7 and 10, which are the tuplet values for these parts. But, the tuplet ratios are still 7:30 and 10:30, which were specifically for 8th notes. Now, in the top line, we’re trying to fit 7 half notes into the space of 30 half notes, when instead we want to fit 7 half notes into the space of 30 8th notes! No wonder everything looks off!

Fortunately, this is simple to fix. Based on the division and rounding done in set_proper_duration/1 above, we know how many 8th notes are in each new notated tuplet event, so all we have to do is multiply the numerator of the tuplet by that number. For example:

Our original ratio of 7 8th notes into 30 8th notes is now 7 half notes into 30 8th notes. Each half note contains 4 8th notes, so we transform the tuplet into (7 * 4):30 = 28:30. To do this, we also store the multiplicand on the measure:

defmodule Measure do
  defstruct [
    :time_signature, :tuplet, :events,
    :dynamic, :phoneme, :written_duration,
    :eigth_notes_per_duration
  ]
  ...

  def set_proper_duration(measure = %Measure{time_signature: {_, _}, tuplet: nil}) do
    %Measure{ measure | written_duration: "8", eigth_notes_per_duration: 1 }
  end
  def set_proper_duration(measure = %Measure{tuplet: {n, d}}) do
    with x <- round(d / n) do
      {x, duration} = case x do
        0 -> {x, "16"}
        1 -> {x, "8"}
        2 -> {x, "4"}
        3 -> {x, "4."}
        n when n in [4, 5] -> {4, "2"}
        6 -> {x, "2."}
        7 -> {x, "2.."}
        _ -> {8, "1"}
      end
      %Measure{ measure | written_duration: duration, eigth_notes_per_duration: x }
    end
  end
end

alphabet-part-8/page1_better.png

There we go! Except that some of those 8th notes really should be 16th notes, but they’re not. And that’s because my math is bad. When rounding, I said that a rounded value of 0 should return 16th notes, but 8/16 is exactly 0.5, which gets rounded up to 1, which means we stay using 8th notes.

With just a couple of small code changes, to account for being able to map to 16th notes, and to round the tuplet numerator properly:

defmodule Measure do
  ...
  def _to_lily(measure = %__MODULE__{tuplet: {n, d}, eigth_notes_per_duration: e}) do
    case all_rests?(measure) do
      true  -> "  R8 * #{d}"
      false -> "  \\tuplet #{round(n * e)}/#{d} { #{events_to_lily(measure)} }"
    end
  end

  ...

  def set_proper_duration(measure = %Measure{time_signature: {_, _}, tuplet: nil}) do
    %Measure{ measure | written_duration: "8" }
  end
  def set_proper_duration(measure = %Measure{tuplet: {n, d}}) when d / n <= 0.5 do
    %Measure{ measure | written_duration: "16", eigth_notes_per_duration: 0.5 }
  end
  def set_proper_duration(measure = %Measure{tuplet: {n, d}}) do
    with x <- round(d / n) do
      {x, duration} = case x do
        0 -> {x, "16"}
        1 -> {x, "8"}
        2 -> {x, "4"}
        3 -> {x, "4."}
        n when n in [4, 5] -> {4, "2"}
        6 -> {x, "2."}
        7 -> {x, "2.."}
        _ -> {8, "1"}
      end
      %Measure{ measure | written_duration: duration, eigth_notes_per_duration: x }
    end
  end
end

alphabet-part-8/page1_with_16ths.png

Great! There’s only one more thing that stands out as affecting the readability of the score, and that’s the text of the tuplets. In the fourth staff, we have a tuplet of 24:30, but it is notated with 3 whole notes. As calculated above, the tuplet ratio is between 8th notes on both sides, but when the notation uses whole notes, 24 is less descriptive than we might like. Fortunately, LilyPond lets us put just about anything we want in the tuplet text, so let’s go ahead and do that.

I also took this chance to address issue #1 from a ways back in this post: find measures in which the tuplet divides evenly into the measure and “detupletify”. In those cases, we can print the proper generated duration without needing the tuplet markup, as you can see in the score below in the second and ninth staves.

  def _to_lily(measure = %__MODULE__{tuplet: {n, d}, eigth_notes_per_duration: e}) do
    case all_rests?(measure) do
      # All rests, print a full measure rest
      true  -> "  R8 * #{d}"
      false ->
        with ratio <- round(n * e) / d do
          case round(ratio) == ratio do
            # The ratio does not need to be tupleted, so no extra markup is necessary
            true -> events_to_lily(measure)
            # Generate a descriptive tuplet text mark
            false ->
              "  \\once \\override TupletNumber #'text =\n" <>
              "    #(tuplet-number::non-default-fraction-with-notes #{n} \"#{measure.written_duration}\" #{d} \"8\")"
              <> "\n" <>
              "  \\tuplet #{round(n * e)}/#{d} { #{events_to_lily(measure)} }"
          end
        end
    end
  end

alphabet-part-8/page1_with_tuplet_text.png

The changes here are even more apparent on the last page of the score, where several measures of 4:2 tuplets have been renotated as 4 untupleted 16th notes:

alphabet-part-8/page78_with_tuplet_text.png

I think that’s enough for one post, but here’s a list of what I want to tackle next time:

  • attach the consonant phonemes to their parts
  • clean up repeated dynamics and add some hairpins as dynamics change
  • have dynamic and phoneme markup attached to the first note in a measure, not the first event, which might be a rest
  • work on being able to extract clean, legible parts for performers
  • try to clean up repeated 8th and 16th rests, though this is a tricky task while retaining some semblance of traditional score legibility (i.e. notating rests on the downbeats of natural measure subdivisions, rather than willy-nilly)

The code, as it exists by the end of this post, can be found on Github here.



Thanks for reading! If you liked this post, and want to know when the next one is coming out, follow me on Twitter (link below)!


Copyright © 2016, Planet Erlang. No rights reserved.
Planet Erlang is maintained by Proctor.