Mike Schaeffer's Blog

Articles with tag: clojure
June 5, 2023

If you've been around programming for a while, you've no doubt come across the Lisp family of languages. One of the oldest languages still in use, Lisp has contributed much to the profession, but it's probably most infamous for the "S-expression". S-expressions are a text-based serialization of the language's core data structures. Since Lisp is written in terms of those same data structures, the S-expression is also the syntax of the language.

To give you a taste if you're not familiar, here's a simple Clojure function for parsing a string identifier. The identifier is passed in as either a numeric string (123) or a hash ID (tlXyzzy), and the function parses either form into a number.

(defn decode-list-id [ list-id ]
  (or (try-parse-integer list-id)
      (hashid/decode :tl list-id)))

In a "C-Like Langauge", the same logic looks more or less like this:

function decodeListId(listId) {
    return tryParseInteger(listId) || hashid::decode("tl", listId);
}

Right off the bat, you'll notice a heavy reliance on parentheses to delimit logical blocks. With the exception of the argument list ([ list-id ]), every logical grouping in the code is delimited by parentheses. You'll also notice the variable name (list-id) contains a hyphen - not allowed in C-like languages. I could point out more, but even stopping there, it's clear that Lisp syntax is unusual to modern eyes.

What may be even more unusual about this syntax is the fact that some people like it. I count myself among them. It's strange, but there are reasons for the strangeness. The strangeness, while it imposes costs, also offers benefits. It's these benefits that I wish to discuss.

Before I continue, I'd like to first credit Fernando Borretti's recent post on Lisp syntax. It's always good to see a defense of Lisp syntax, and I think his article nicely illustrates the way that the syntax of the language supports one of Lisp's other hallmark features: macros. If you haven't already read it, you should click that link and read it now. That said, there's more to the story, which is why I'm writing something myself.

If you've studied compilers, it's probably struck you how much of the first part of the course is spent on various aspects of language parsing. You'll study lexical analysis, which lets you divide streams of characters into tokens. Once you understand the basics of lexical analysis, you'll then study how to fold linear sequences of tokens into trees according to a grammar. Then come a few more tree transformations, and finally linearization back to a sequence of instructions for some more primitive machine. Lisp's syntax concerns the first two steps of this - lexical and syntactic analysis.

Lexical analysis for Lisp works much like it does for other languages; the main difference is in the token rules. Lisp allows hyphens in symbols (see above), where most other languages do not. This changes how the language looks, but isn't a huge structural advantage of Lisp's syntax - the example above reads much the same with camelCase names:

(defn decodeListId [ listId ]
  (or (tryParseInteger listId)
      (hashid/decode :tl listId)))

Where things get interesting for Lisp is in the syntactic analysis stage - the folding of linear lists of tokens into trees. One of the first parsing techniques you might learn while studying compilers is known as predictive recursive descent, specifically for LL(1) grammars. Without going into details, these are simple parsers to write by hand. The grammar of an LL(1) language can be mapped directly to collections of functions. Then, if there's a choice to be made during parsing, it can always be resolved by looking a single token ahead to predict the next rule you need to follow. These parsers have many limitations in what they can parse (no infix expressions), but they can parse quite a bit, and they're easy to write.

Do you see where this is going? Lisp falls into the category of languages that can easily be parsed using a recursive descent parser. Another way to put it is that it doesn't take a lot of sophistication to impart structure on a sequence of characters representing a Lisp program. While it may be hard to write a C++ parser, it's comparatively easy to write one for Lisp. Thanks to the simple nature of Lisp's grammar, the language really wears its syntax tree on its sleeve. This is and has been one of the key advantages Lisp derives from its syntax.
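To make that concrete, here's a minimal sketch in Clojure of such a parser for a simplified S-expression syntax. The tokenize and parse-expr names are mine, this is emphatically not Clojure's real reader, and the code deliberately ignores strings, comments, and error handling for unbalanced input:

;; Split source text into parens and atoms.
(defn tokenize [text]
  (re-seq #"[()]|[^\s()]+" text))

;; Parse one expression from a token sequence, returning a pair of
;; [parsed-expression remaining-tokens]. A single token of lookahead
;; decides everything.
(defn parse-expr [[token & more]]
  (if (= token "(")
    (loop [tokens more
           exprs []]
      (if (= (first tokens) ")")
        [exprs (rest tokens)]
        (let [[expr remaining] (parse-expr tokens)]
          (recur remaining (conj exprs expr)))))
    [token more]))

;; (first (parse-expr (tokenize "(or (try-parse-integer list-id) x)")))
;; => ["or" ["try-parse-integer" "list-id"] "x"]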

The first advantage is that simple parsing makes for simple tooling. If it's easier to write a parser for a language, it's easier to write external tools for that language that understand it in terms of its syntax. Emacs' paredit-mode is a good example of this. paredit-mode offers commands for interacting with Lisp code on the level of its syntactic structure. It lets you cut text based on subexpressions, swap subexpressions around, and perform similar operations based on the structure of the language. It is easier to write tools that operate on a language like this if the syntax is easily parsed. To see what I mean, imagine a form of paredit-mode for C++ and think how hard it would be to cut a subexpression there. What sorts of parsing capabilities would that command require, and how would it handle the case where code in the editor is only partially correct?

This is also true for human users of this sort of tooling. Lisp's simple grammar enables it to wear its structure on its sleeve for automatic tools, but also for human users of those tools. The properties of Lisp that make it easy for tools to identify a specific subexpression also make it easier for human readers of a block of code to identify that same subexpression. To put it in terms of paredit-mode, it's easier for human readers to understand what the commands of that mode will do, since the syntactic structure of the language is so much more evident.

A side benefit of a simple grammar is that simpler grammars are more easily extended. Fernando Borretti speaks to the power of Lisp macros in his article, but Common Lisp also offers reader macros. A reader macro is bound to a character or sequence of characters, and receives control when the standard Lisp reader encounters that sequence. The standard Lisp reader passes in the input stream and allows the reader macro function to do what it wants, returning a Lisp value reflecting the content of what it read. This can be used to do things like add support for XML literals or infix expressions.

If the implications are not totally clear, Lisp's syntactic design is arguably easier for tools, and it allows easier extension to completely different syntaxes. The only constraint is that the reader macro has to accept its input as a Lisp input stream, process it somehow with Lisp code, and then return the value it "read" as a single Lisp value. It's very capable, and fits naturally into the simple lexical and syntactic structure of a Lisp. Infix languages have tried to be this extensible, but have largely failed, due to the complexity of the task.

Of course, the power of Lisp reader macros is also their weakness. By operating at the level of character streams (rather than Lisp data values) they make it impossible for external tools to fully parse Common Lisp source text. As soon as a Lisp reader macro becomes involved, there exists the possibility of character sequences in the source text that are entirely outside the realm of a standard S-expression. This is like JSX embedded in JavaScript or SQL embedded in C - blocks of text that are totally foreign to the dominant language of the source file. While it's possible to add special cases for specific sorts of reader macros, it's not possible to do this in general. The first reader macro you write will break your external tools' ability to reason about the code that uses it.

This problem provides a great example of where Clojure deviates from the Common Lisp tradition. Rather than providing full reader macros, Clojure offers tagged literals. Unlike a reader macro, a tagged literal never gets control over the reader's input stream. Rather, it gets an opportunity at read-time to process a value that's already been read by the standard reader. What this means is that a tagged literal processes data very early in the compilation process, but it does not have the freedom to deviate from the standard syntax of a Clojure S-expression. This implies both the flexibility to customize the reader and the ability for external tools to fully understand ahead of time the syntax of a Clojure source file, regardless of whether or not it uses tagged literals. Whether or not this is a good trade-off might be a matter of debate, but it's in the context of a form of customization that most languages don't offer at all.
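As a quick illustration (the myapp names here are hypothetical), a tagged literal is wired up by mapping a tag to a reader function in a data_readers.clj file at the root of the classpath. The function receives the value the standard reader has already parsed, never the raw character stream:

;; data_readers.clj (at the root of the classpath)
{myapp/point myapp.readers/read-point}

;; src/myapp/readers.clj
(ns myapp.readers)

(defn read-point
  "Called at read time with an already-read Clojure value - here, a
   two-element vector."
  [[x y]]
  {:x x :y y})

;; In source text, #myapp/point [3 4] now reads as {:x 3 :y 4}, but the
;; text after the tag is still an ordinary S-expression.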

To be clear, there's more to the story. As Fernando Borretti mentions in his article, Lisp's uniform syntax extends across the language. A macro invocation looks the same as a special form, a function call, or a defstruct. Disambiguating between the various semantics of a Lisp form requires you to understand the context of the form and how symbols within that form are bound to meanings within that context. Put more simply, a function call and a macro invocation can look the same, even though they may have totally different meanings. This is a problem, and it's a problem that arises directly from the simplicity of Lisp syntax I extol above. I don't have a solution to it other than to observe that if you're going to adopt Lisp syntax and face the problems of that syntax, you'd do well to fully understand and use the benefits of that syntax as compensation. Everything in engineering, as in life, is a tradeoff.

It's that last observation that's my main point. We live in a world where the mathematical tradition has, for centuries, been infix expressions. This has carried through to programming, which has also largely converged on C-like syntax for its dominant languages. Lisp stands against both of these traditions in its choice of prefix expressions written in a simpler grammar than the norm. There are costs to this choice, and those costs tend to be immediately obvious. There are also benefits, and the benefits take time to make themselves known. If you have that time, it can be a rewarding thing to explore the possibilities, even if you never get the chance to use them directly in production.

August 12, 2020

Like a lot of engineers, I have a handful of personal projects I keep around for various reasons. Some are useful and some are just for fun, but none of them get the same sort of investment as a funded commercial effort. The consequence of this is that it's all the more important to keep things as simple as possible, to focus the investment where it counts. Part of the way I achieve that is by spending some initial time putting together a standard packaging approach. I know, I know - "standard packaging approach" doesn't sound like "fun personal project" - but getting the packaging out of the way makes it easier to focus on the actual fun part: building out functionality. It's for that reason that I've also successfully used variants of this approach on smaller commercial projects. Hopefully, it will be useful to you too.

Setting the stage, the top level view is this:

  • Uberjar packaging of single binaries using Leiningen and a few plugins.
  • Standard scripts and tools for packaging and install.
  • Use of existing Linux mechanisms for service control.
  • A heavy tendency toward 12 Factor principles.

What this gets you is a good local interactive development story and easy deployment to a server. I've also gotten it to work with client-side code, using Figwheel.

What it doesn't get you is direct support for large numbers of processes or servers. Modern hardware is fast and capable, so you may not have those requirements, but if you do, you'll need something heavier weight, to reduce both management overhead and costs. (In my day job, we've done some amazing things with Kubernetes.)

The example project I'm going to use is the engine for this blog, Rhinowiki. It's useful, but simple enough to serve as a model for the packaging approach. If you're also interested in strategies for managing apps with read/write persistence (SQL) and rich client code, I have a couple of other programs packaged this way with those features. Even with these, the essentials of the packaging strategy are exactly the same as what I describe here.

Everything begins with a traditional project.clj, and the project can be started locally with the usual lein run.
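For orientation, here's a minimal sketch of the project.clj entries this relies on. The :main namespace name is an assumption for illustration; the real file is in the Rhinowiki repository, linked later in this article:

;; project.clj (sketch) - just enough for lein run to find an entry point.
(defproject rhinowiki "0.3.3"
  :dependencies [[org.clojure/clojure "1.10.0"]]
  :main rhinowiki.main)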

Once running, main immediately writes a herald log message at info level:

(defn -main [& args]
  (log/info "Starting Rhinowiki" (get-version))
  (let [config (load-config)]
    (log/debug "config" config)
    (webserver/start (:http-port config)
                     (blog/blog-routes (blog/blog-init config)))
    (log/info "end run.")))

This immediately lets you know the process has started, logs are working, and which version of the code is running. These are usually the first things verified after an install, so it's good to ensure they happen early on. This is particularly useful for software that's not interactive or running on slow hardware. (I've run some of this code on Raspberry Pi hardware that takes ten or so seconds to get to the startup herald.)

The way the version is acquired is interesting too. The call to get-version is really a macro invocation and not a function call.

(defmacro get-version []
  ;; Capture compile-time property definition from Lein
  (System/getProperty "rhinowiki.version"))

Because macros are evaluated at compile time, the macroexpansion of get-version has access to JVM properties defined at build time by Leiningen.
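For illustration, one way to define such a property (a sketch - see the Rhinowiki project.clj for the real mechanism) is through Leiningen's :jvm-opts, which also apply to the JVM Leiningen uses to compile the project:

:jvm-opts ["-Drhinowiki.version=0.3.3"]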

The next step is to pull in configuration settings using Anatoly Polinsky's https://github.com/tolitius/cprop library. cprop can do more than what I use it for here, but here, I use it to load a single EDN config file. cprop lets the name of that file be identified at startup via a system property, making it possible to define a file for each server, as well as a local config file specified in project.clj:

:jvm-opts ["-Dconf=local-config.edn"]

I've also found it useful to minimize the number and power of configuration settings. Every setting that changes is a risk that something will break when you promote code. Every setting that doesn't change is a risk of introducing a bug in the settings code.

I also dump the configuration to a log very early in the startup process.

(log/debug "config" config)

Given the importance of configuration settings, it's occasionally important to be able to inspect the settings in use at run-time. However, this log is written at debug level, so it doesn't normally print. This reduces the risk of accidentally revealing secret keys in the log stream. Depending on the importance of those keys, there is also much more you can do to protect them, if preventing the risk is worth the effort.

After all that's done, main transfers control over to the actual application:

(webserver/start (:http-port config)
                 (blog/blog-routes (blog/blog-init config)))

With a configurable application running, the next step is to get it packaged in a way that lets us predictably install it elsewhere. The strategy here is a two-step approach: build the code as an uberjar, and include the uberjar in a self-contained .tar.gz that serves as an installation package.

  • The installer package contains everything needed to install the software (the one exception being the JVM itself).
  • The package name includes the version number of the software: rhinowiki-0.3.3.tar.gz.
  • Files in the installation package all have a prefix (rhinowiki-install, in this case) to confine the installation files to a single directory when installing. This makes it easy to avoid crosstalk between multiple installers and to delete installation directories after you're done with an installation.
  • There is an idempotent installation script (install.sh) at the root of the package. Running this script either creates or updates an installation.
  • The software is installed as a Linux service.

The net result of this packaging is an installation/upgrade process that works like this:

tar xzvf rhinowiki-0.3.3.tar.gz
cd rhinowiki-install
sudo service rhinowiki stop
sudo ./install.sh
sudo service rhinowiki start

To get to this point, I use the Leiningen release task and the lein-tar plugin, both originally by Phil Hagelberg. There's a wrapper script, but the essential command is lein clean && lein release $RELEASE_LEVEL. This instructs Leiningen to execute a series of tasks listed in the release-tasks key in project.clj.

I've had to modify Leiningen's default list of release tasks in two ways: I skip signing of tagged releases in git, and I invoke lein-tar rather than deploy. However, the full task list needs to be completely restated in project.clj (https://github.com/mschaef/rhinowiki/blob/master/project.clj#L42), so it's a lengthy setting.

:release-tasks [["vcs" "assert-committed"]
                ["change" "version" "leiningen.release/bump-version" "release"]
                ["vcs" "commit"]
                ["vcs" "tag" "--no-sign" ]
                ["tar"]
                ["change" "version" "leiningen.release/bump-version"]
                ["vcs" "commit"]
                ["vcs" "push"]]

The configuration for lein-tar is more straightforward - include the plugin, and specify a few options. The options request that the packaged output be written in the project root, include an uberjar, and extract into an install directory rather than just CWD.

:plugins [[lein-ring "0.9.7"]
          [lein-tar "3.3.0"]]

;; ...

:tar {:uberjar true
      :format :tar-gz
      :output-dir "."
      :leading-path "rhinowiki-install"}

Give the uberjar a specific fixed name:

:uberjar-name "rhinowiki-standalone.jar"

And populate it with a few files in addition to the uberjar itself - lein-tar picks these files up from pkg/ at the root of the project directory hierarchy. These files include everything else needed to install the application - a configuration map for cprop, an install script, a service script, and log configuration.

The install script is the last part of the process. It's an idempotent script that, when run on a server as sudo, guarantees that the application is installed. It sets up users and groups, copies files from the package to wherever they belong, and uses update-rc.d to ensure that the service scripts are correctly installed.

This breaks the packaging and installation process down into the following steps:

  • ./package.sh
  • scp package tarball to server and ssh in
  • Extract the package - tar xzvf rhinowiki-0.3.3.tar.gz
  • Change into the expanded package directory - cd rhinowiki-install
  • Stop any existing instances of the service - sudo service rhinowiki stop
  • Run the install script - sudo ./install.sh
  • (Re)Start the service - sudo service rhinowiki start

At this point, I've sketched out the approach end to end, and I hope it's evident that this can be used in fairly simple scenarios. Before I close, let me also talk about a few sharp edges to be aware of. Like every other engineering approach, this packaging strategy has tradeoffs, and some of these tradeoffs require specific compromises.

The first is that this approach requires dependencies (notably the JVM) to be manually installed on target servers. For smaller environments, this can be acceptable; for larger numbers of target VMs, almost definitely not.

The second is that there's nothing about persistence in this approach. It either needs to be managed externally, or the entire persistence story needs to be internal to the deployed uberjar. This is why I wrote sql-file, which provides a built-in SQL database with schema migration support. Another approach is to handle it altogether externally, which is what I do for Rhinowiki. The Rhinowiki store is a git repository, and it's managed out of band with respect to the deployment of Rhinowiki itself.

But these are both specific problems that can be managed for smaller applications. Often, it's worth accepting the costs associated with these problems to gain the benefit of fewer software components and moving pieces. If you're in a situation like that, I hope you give this approach a try and find it useful. Please let me know if you do.

January 24, 2019

Despite several good online resources, it's not necessarily obvious how friend's wrap-authorize interacts with Compojure routing.

This set of routes handles /4 incorrectly:

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (friend/wrap-authorize (GET "/3" [] (site-page 3)) #{::user})
  (GET "/4" [] (site-page 4)))

Any attempt to route to /4 by a user that doesn't have the ::user role will fail with the same error you would expect to (and do) get from an unauthorized attempt to route to /3. The reason this happens is that Compojure considers the four routes in the sequence in which they are listed, and wrap-authorize works by throwing out of the handler when there is an authorization error (aborting the routing entirely).

So, even though the code looks like the authorization check is associated with /3, it's really associated with the point in evaluation after /2 is considered, but before /3 or /4. So for an unauthorized user, Compojure never considers either the /3 or /4 routes. /4 (and anything that might follow it) is hidden behind the same security as /3.

This is what's meant when the documentation says to do the authorization check after the routing and not before. Let the route decide if the authorization check gets run and then your other routes won't be impacted by authorization checks that don't apply.

What that looks like in code is this (with the friend/authorize check inside the body of the route):

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (GET "/3" [] (friend/authorize #{::user} (site-page 3)))
  (GET "/4" [] (site-page 4)))

The documentation does mention the use of context to help solve this problem. Where that plays a role is when a set of routes needs to be hidden behind the same authorization check. But the essential point is to check and enforce authorization only after you know you need to do it.
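For completeness, here's a sketch of that context-based grouping, reusing the hypothetical site-page handlers from above and moving the protected pages under an /admin prefix. Because the context match happens first, wrap-authorize only runs for requests that have already routed into /admin:

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (context "/admin" []
    ;; The authorization check guards only this subtree of routes.
    (friend/wrap-authorize
     (routes
      (GET "/3" [] (site-page 3))
      (GET "/4" [] (site-page 4)))
     #{::user})))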

August 3, 2018

It's been a long time coming, but I've finally replaced blosxom with a custom CMS I've been writing called Rhinowiki. More than a serious attempt at a CMS, this is mainly a fun little side project to write some Clojure, experiment a bit with JGit, and hopefully make it easier to implement a few of my longer term plans that might have been tricky to do in straight Perl.

Full source in the link above, a high level summary here:

  • Everything is in Clojure.
  • Backend format is Markdown as interpreted by markdown-clj.
  • Source code is highlighted using highlight.js.
  • Markdown rendering is done entirely on the server, with syntax highlighting on the client. (I'm looking into Nashorn to run highlight.js server side too, but don't know if that's possible within my time constraints.)
  • Back end storage is managed and retrieved via JGit.
  • All requests are served out of memory.
  • There's a hand rolled (and conformant) Atom feed.
  • Also RSS 2.0.