Why Clojure is not just Yet Another Lisp


I have been using Clojure as my primary programming language at work for over two years now (and I absolutely love it!). I still run into many developer friends and co-workers who fall into one of two groups:

  1. Folks who have never heard of Clojure or Lisps in general.
  2. Folks who have heard of Clojure, but dismissed it as Yet Another Lisp.

In this blog post, I hope to talk to group #2 about what makes Clojure special. But I will start with a brief background for the benefit of group #1.

Unless you have been living under a rock, I assume you have heard of Functional Programming (FP). It has been gaining momentum in recent times, especially with popular front-end frameworks like React and Redux adopting the paradigm. Even Java, the poster child of OOP, has supported FP since Java 8. Modern-day services need to scale globally to serve a worldwide audience over the internet, making concurrency a necessity. That’s where FP shines. Lisps are a class of programming languages, dating back to 1958, built on top of Lambda Calculus, that favor the functional paradigm. Clojure is one of the more recent Lisps, created in 2007 by Rich Hickey.

What makes Clojure stand out from other Lisps?

1. Persistent Immutable Data Structures

Clojure has a rich set of data structures. What most people don’t realize is how bleeding-edge these are in terms of language/data-structure design. They are the culmination of decades of progress in the field, building on and improving upon earlier designs.

Data structure evolution over time leading up to Clojure. Courtesy: What Lies Beneath – A Deep Dive Into Clojure’s Data Structures – Mohit Thatte.

Immutable

Most programmers who have had to face/fix unanticipated production bugs in their careers know about the dangers of mutability. Java developers have tried to cope with this using measures such as sprinkling final keywords all over their codebases.

Clojure solves the problem at a higher level of abstraction by making all data structures immutable by default. If you are new to this concept, you might be wondering how anyone can get anything done if everything is immutable. The key to understanding how this works is the distinction between value and identity, concepts which are usually conflated in the OOP/imperative world. In simpler terms, whenever you need to change something, you create a new version of it, while the original version remains untouched and usable.

If you think about how you might implement immutability, a simple solution might be to copy the whole data structure each time it is modified. But you will soon realize how inefficient that scheme is. This is where the advanced data structures Clojure employs under the hood (such as the hash array mapped trie, or HAMT) come into play. Historically, you had to choose between immutability and performance. With Clojure, you can get both!
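To make this concrete, here is a quick REPL sketch (with made-up data) showing that “modifying” a map simply yields a new value while the original stays intact:

(def user {:name "Ada" :role :admin})

;; assoc returns a *new* map; user itself is untouched
(def updated-user (assoc user :role :viewer))

user          ;; => {:name "Ada", :role :admin}
updated-user  ;; => {:name "Ada", :role :viewer}

Under the hood, the two maps share most of their structure, so the new version is cheap to create.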

Persistent

As mentioned above, with immutability, you create different versions of the same data structure over time. Clojure implements immutable data structures in a way that lets you grab any of these previous or current versions and further modify them to create newer versions without any issues. This property is called persistence.
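A small sketch of what this looks like in practice (again with made-up data): any version, old or new, can serve as the base for further modification:

(def v1 [1 2 3])
(def v2 (conj v1 4)) ;; a new version built from v1
(def v3 (conj v1 5)) ;; a branch off the *old* version v1

v1 ;; => [1 2 3]
v2 ;; => [1 2 3 4]
v3 ;; => [1 2 3 5]

All three versions remain fully usable and share common structure internally.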

Sharing of data across versions in Clojure.
Courtesy: Practicalli – Theory: Persistent Data Structures

2. Interoperable with the JVM (as well as .NET and JavaScript)

Clojure compiles to Java bytecode and can run on the JVM. I cannot stress enough how much of a game-changer this was for my team. In general, this makes it much easier to start using Clojure in a corporate setting, for the following reasons:

  1. You can sneak Clojure into your existing Java code base just like a library.
  2. Neither you nor the Clojure community has to reinvent the wheel by re-creating every useful library that already exists in Java.

The inability to do these two things has been a significant limiting factor in the adoption of other Lisps in the past. The second point is huge. Not only can you take advantage of all the libraries on the JVM, your Clojure code will also work seamlessly with your company’s tooling/frameworks built for the JVM. The interoperability is really good. You can even expose your Clojure code to Java as classes and interfaces so that callers can’t tell the difference (see gen-class).
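To give a taste of how seamless this is, here is a minimal interop sketch (illustrative only) calling Java directly from Clojure:

;; Instance and static method calls on plain Java classes
(.toUpperCase "clojure")   ;; => "CLOJURE"
(Math/abs -42)             ;; => 42

;; Instantiating and using a Java collection
(def jlist (java.util.ArrayList. [1 2 3]))
(.size jlist)              ;; => 3

No wrappers, no bindings to maintain – the Java object model is directly available.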

The above points hold for the .NET ecosystem as well as the JavaScript ecosystem, since Clojure has implementations that compile to the respective runtimes. The JS version is called ClojureScript.

3. Clojure.spec (Bonus)

Clojure is a dynamic programming language, like most other Lisps. One key practical implication is that you won’t have static type checking for your code. While static type checking does occasionally help you catch errors at compile time, it has two major limitations:

  1. The programmer does not get to choose what/where to type-check statically. Everything is typed. This amounts to a lot of verbosity and unnecessary noise in the code.
  2. The type system is fairly limited in its expressiveness. You can only check against the data types provided by the language designers.

Unless you experience it first-hand, it is hard to imagine a world where you get stronger verification capability than static typing offers, but without these two limitations. Clojure spec provides exactly this. While spec does not aim to be a type system, in many ways it is much more than one. The best part is that you get spec out of the box with Clojure, not as a separate library.

To illustrate the point about expressiveness, think of the primitive data types in your favorite statically typed language (say, Java). You have your Integer and String data types. What if, in your domain/program, you want to distinguish positive numbers from negative numbers with the same level of rigor? You cannot, in any easy/straightforward way, without creating new classes and objects. This gets trickier if your type definitions are more complex or involve dynamism.

With Clojure spec, you can create new types using arbitrary logical predicates (such as int?, pos?, etc.). Here is an example creating a new spec called ::pos-int which describes a positive integer.

(s/def ::pos-int (s/and int? pos?))

You can choose to spec your important data (primarily at interfaces) and easily validate at runtime that the data you receive/output meets the spec. Spec also supports generative testing by auto-generating test data based on the specification. Rather than calling your function with arbitrary dummy data in your unit/integration tests, you spec its input/output and use that to auto-generate hundreds of test inputs to run the function against, programmatically validating that the output meets the spec.
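Here is a small sketch of both uses – runtime validation and generative testing. The square function is a made-up example, and the generative parts assume org.clojure/test.check is on the classpath:

(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen]
         '[clojure.spec.test.alpha :as stest])

(s/def ::pos-int (s/and int? pos?))

;; Runtime validation at a boundary
(s/valid? ::pos-int 42)   ;; => true
(s/valid? ::pos-int -7)   ;; => false
(s/explain ::pos-int -7)  ;; prints something like: -7 - failed: pos?

;; Auto-generate sample data from the spec
(gen/sample (s/gen ::pos-int) 5)  ;; => e.g. (1 2 1 5 3)

;; Spec a function, then let spec generate 100s of inputs
;; and check every return value against the :ret spec
(defn square [n] (* n n))
(s/fdef square
  :args (s/cat :n ::pos-int)
  :ret ::pos-int)
(stest/check `square)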

Conclusion

There are many other reasons to love Clojure. Here is a recent article from Uncle Bob (of Clean Code fame) on why he loves Clojure – link. However, the points above are the key reasons why I think Clojure will succeed where other Lisps failed.

I hope you will give Clojure serious consideration for your next (hobby or work) project. While you are here, may I also recommend catching up on Rich Hickey’s (the creator of Clojure) greatest-hits talks – link. These will definitely make you a better programmer, even if you never end up using Clojure.

“every time I watch one of his talks I feel like someone has gone in and organized my brain”

https://github.com/tallesl/Rich-Hickey-fanclub

Also, a shout-out to Spacemacs as a great editor for Clojure (and everything else).

Big Data Analytics for Healthcare – Experience

One of the cute course slides by Prof. Jimeng Sun

I completed Big Data Analytics for Healthcare (CSE 6250) in Spring 2019 of my OMSCS program – my last semester! It was ranked the hardest course in the program per OMSCentral, a fitting finale to my 3.5-year-long OMSCS journey.

Contrary to what some might think, I did not choose to take the course because I am a masochist. Rather, the course seemed to cover a lot of ground not explored by any other course in the program. In particular, I was interested in learning more about:

  • Hadoop, Pig, Hive, MapReduce
  • Spark (RDDs, Spark SQL, MLlib, GraphX, etc.)
  • CNNs, RNNs
  • Scala

For me, the last item was one of the major considerations. I consider myself a programming-language geek and wouldn’t miss a chance to play around with a new language if I can help it. I must say I was in for a treat. I got to write some significant code in Scala. The sample code, templates, etc. provided for the projects leveraged some of the nicer parts of Scala, such as type inference, case classes, and pattern matching. All this, along with its functional nature and full-fledged REPL, made Scala a delight to work with.

The course was pretty well designed overall. The well-maintained self-paced labs available for almost all topics were really helpful in getting up to speed. Check out their Scala lab to get a feel – link. The course videos were well planned. The TAs were pretty responsive on Piazza. All the good stuff.

The final project was the highlight of the course. It was a group project, worth 40% of the overall grade. Teams were allowed to pick from a variety of interesting topics. The project milestones were structured much like how someone would pitch a data-science project in a corporate setting (say, to your CEO), execute it, and come up with a report and presentation summarizing the work.

I was fortunate to get a really good team. There were four of us. We chose the domain of NLP for healthcare. After exploring a variety of suggested ideas in the domain, we settled on the following topic: Hierarchical Ensembles of Heterogeneous Models for Prediction of Medical Codes from Clinical Text. We worked well together and did a good amount of research and implementation. We were able to try out various ensemble ML models combining SVMs, DNNs, etc. as part of the project.

While there were homeworks to keep you busy on most weekends, I felt that the course’s notoriety for difficulty was unfounded. If you go in with an open mind, excited about a fast-paced journey across the various big-data tools/technologies used for data science in the industry, you won’t be disappointed.

  • Difficulty: 4/5
  • Rating: 5/5

In related news, I got my Georgia Tech Masters degree in my email a couple of weeks back! It was truly a moment of happiness. I still need to figure out what to do with all the extra time now that the course is over 🙂

My Georgia Tech Masters degree 🙂

Graduate Algorithms – Experience

I completed the Graduate Algorithms course (CS 8803) as part of my OMSCS program during Fall 2018. It was an interesting and really informative course.

The main topics covered in the course include dynamic programming; divide and conquer, including FFT; randomized algorithms, including the RSA cryptosystem and hashing using Bloom filters; graph algorithms; max-flow algorithms; linear programming; and NP-completeness.

Course website

Of these, I had not studied FFTs or max-flow algorithms during my undergrad. Also, though I had studied basic DP (dynamic programming) in my undergrad (knapsack, matrix multiplication, etc.) and had prepared some more for tech interviews, I had never had rigorous formal training in DP before. This course had plenty of coursework (homeworks and exams) focused on building that DP intuition, which I really liked.

The course text was Algorithms by Dasgupta, Papadimitriou and Vazirani. It is a really good, concise introduction to most advanced algorithm topics. It is especially good as a college textbook because you can realistically expect students to read it cover to cover, unlike, say, Introduction to Algorithms by Cormen et al., which is better suited as a long-term reference manual. My one gripe with the course was that it did not cover the last chapter of Dasgupta, on quantum computing.

The course grading breakdown was as follows:

  • Homeworks: 5%
  • Project: 10%
  • Midterms (two): 25% each
  • Final exam: 35%

The midterms and final exam were closed-book, 3-hour exams. The bulk of the grade (85%) came from these. There were 8 homeworks, roughly one per week, which together added up to just 5%. Each usually involved 4-5 questions requiring you to write your solution in pseudo-code (only in the case of DP) or in plain English. One might think that doing the homeworks is just not worth the effort. However, I cannot overemphasize how important and helpful for the exams these were.

Another thing I am ever grateful to the homeworks for is introducing me to LaTeX. Though I had heard that LaTeX is pretty good for writing technical documents, I had never had a good use case or forcing function to make me learn it. The HWs usually involved formulas, big-O complexities, pseudo-code, matrices, etc. This gave me a really good opportunity to learn LaTeX. It’s amazing!
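To give a flavor, here is the kind of (made-up) HW fragment that takes a few minutes in LaTeX but would be painful in a word processor:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A recurrence and its solution, typical of a divide-and-conquer answer
The running time satisfies
\[ T(n) = 2\,T(n/2) + O(n), \]
which by the Master theorem gives $T(n) = O(n \log n)$.
\end{document}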

While we are on the topic of LaTeX, if it is something that interests you, may I suggest the Emacs plugin for LaTeX? I used the Spacemacs latex layer. Among other things, it lets you preview the rendered document in Emacs itself (SPC m p p)!

The TAs and the professor were prompt in answering queries on Piazza and in office hours. The grading turnaround times were also really fast. Overall, I really enjoyed the course. My ratings:

  • Difficulty: 4/5
  • Rating: 4.5/5

Machine Learning for Trading – Experience

I completed the Machine Learning for Trading (CS 7646-O01) course during the Summer of 2018. This was a fun and light course.

The course was divided into three mini-courses.

The first part of the course was mainly about getting familiar with NumPy and Pandas. The key takeaway was how small differences (optimizations) in how you handle large arrays using NumPy can lead to substantial performance gains.

For me, the second part was the most enlightening. It introduced various concepts of the stock market. This was something I had always wanted to learn and, in fact, had been learning on an ad-hoc, need-to-know basis from Wikipedia, Investopedia, etc. However, I had always wanted a more formal and comprehensive introduction to these concepts. Some of the ones discussed include:

  1. Going long or short with shares
  2. Capital Assets Pricing Model (CAPM)
  3. Efficient Markets Hypothesis
  4. The Fundamental Law of active portfolio management
  5. Arbitrage Pricing Theory
  6. Technical analysis (Bollinger Bands, Sharpe ratio etc.)

The third part was about using different ML techniques (supervised: regression, decision trees; reinforcement learning: Q-learning; etc.) to predict future stock prices based on past data. I was familiar with most of these techniques from previous AI/ML courses I had taken. However, using them with time-series data in this setup was something new.

There were five projects (4 proper and 1 intro) in this course and 2 closed-book exams. The professor also recommended some good/interesting external resources for students to understand the stock market better. The movie The Big Short was even part of the syllabus. The course was well paced and well organized. My ratings:

  • Difficulty: 2.5/5
  • Rating: 4/5

The Annotated Turing – Abridged

I recently read the book The Annotated Turing by Charles Petzold. It was really enlightening. Though all computer science graduates study concepts like Turing machines and Turing completeness in their college courses, how and why Turing came up with these concepts is usually not taught (spoiler alert: Turing did not coin either term I just mentioned). Charles does a good job of making Turing’s classic 1937 paper more accessible without dumbing it down too much. He also gives a lot of historical context, as well as the background knowledge that Turing assumed his readers to have.

The book cover

The book is aptly subtitled A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine. In it, Turing’s paper is shown as-is, supplemented with the author’s own annotations in between. In this post, I will try to present a very concise overview of the key takeaways from the book.

Das Entscheidungsproblem

Turing’s actual 1937 paper was titled On Computable Numbers, With An Application To The Entscheidungsproblem. To understand the paper, we need to understand what that problem is. Entscheidung means decision or decidability. In 1900, during his address at the Second International Congress of Mathematicians, David Hilbert, one of the most prominent mathematicians of the time, presented 23 problems as challenges for mathematicians to solve in the coming century. The 10th problem in the list was this:

10. Determination of the Solvability of a Diophantine Equation

Diophantus was a Greek mathematician who is estimated to have lived around 250 CE and wrote a book called Arithmetica. A Diophantine equation has come to mean an algebraic equation in which only whole-number solutions are allowed.

Given such an equation, Hilbert was interested in a process with a finite number of steps to determine whether the equation has a solution, i.e., is the problem solvable? This came to be known as the Entscheidungsproblem, or decision problem. Note that Hilbert was not asking for a general method to solve Diophantine equations here, only for a way to decide solvability.

Turing proves that no such general decision process exists.

Gödel’s Incompleteness Theorem

As Turing himself remarks in the paper, his results are superficially similar to the famous Incompleteness Theorem published by Kurt Gödel in 1931. However, he clarifies the difference between the two results.

Gödel essentially shows that any consistent formal system (powerful enough to express basic arithmetic) is necessarily incomplete: we can find propositions P such that neither P nor ~P (the negation of P) is provable within that system.

Until that time, mathematicians had been attempting to create flawless systems that were both complete and consistent. Gödel’s results proved that such attempts were in vain.

Even with these results, a general decision process as described by Hilbert could conceivably exist. Given an unprovable proposition, it would identify it as unprovable. It would similarly correctly identify provable propositions. It is such a decision process that Turing shows cannot exist.

Turing’s solution

Turing’s approach leverages Predicate Calculus extensively. I will not try to explain the details of Turing’s proof here (for that, you need to read the book), but I will try to summarize the key ideas that he builds up and uses in his proof.

Computing Machine (M): is a machine which

  1. is supplied with a tape running through it, divided into sections (squares) capable of bearing a symbol each.
  2. whose motion at each stage is completely defined by its configuration.
  3. and can perform one of the following operations at any moment:
    1. read one symbol
    2. print one of two kinds of symbols onto a square:
      1. figures (either 0 or 1)
      2. other symbols
    3. change the square being scanned (by shifting left or right)

Here configuration is what tells the machine what action to perform when it encounters a symbol. Henceforth, I’ll refer to Computing Machines as just machines.

Circular and circle-free machines: If a machine never writes more than a finite number of symbols of the first kind (0 or 1), it is called circular. Otherwise, it is circle-free. This notion becomes one of the most crucial aspects of Turing’s proof later.

Notice that the terminology here is a bit counter-intuitive. In Turing’s formulation, a machine which goes on writing figures forever is considered good (circle-free), whereas one which stops writing after a finite number of steps is bad (circular).

Computable Sequences: A sequence is said to be computable if it can be computed by a circle-free machine. Turing shows that his machines are capable of computing all sequences that we generally consider computable (such as the digits of pi).

Description Number (D.N): Turing shows how all machines can be represented using a standard form such as:

q1 S0 S1 R q2; q2 S0 S0 R q3; q3 S0 S2 R q4; q4 S0 S0 R q1;

Here the first section (delimited by ;), q1 S0 S1 R q2;, can be interpreted as: from state q1, if the machine reads symbol S0, print the symbol S1, shift right, and transition to state q2.
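To make the mechanics concrete, here is a tiny simulator sketch (my own, in Clojure – not Turing’s formulation) that runs the table above on a blank tape:

;; [state scanned-symbol] -> [symbol-to-print move next-state]
;; :S0 is the blank; :S1 is the figure 0, :S2 is the figure 1
(def table
  {[:q1 :S0] [:S1 :R :q2]
   [:q2 :S0] [:S0 :R :q3]
   [:q3 :S0] [:S2 :R :q4]
   [:q4 :S0] [:S0 :R :q1]})

(defn step [{:keys [state tape pos]}]
  (let [scanned (get tape pos :S0)
        [out move next-state] (table [state scanned])]
    {:state next-state
     :tape  (assoc tape pos out)
     :pos   (if (= move :R) (inc pos) (dec pos))}))

;; After 8 steps the tape reads S1 S0 S2 S0 S1 S0 S2 S0,
;; i.e. the machine prints 0 and 1 alternately on every other square
(:tape (nth (iterate step {:state :q1 :tape {} :pos 0}) 8))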

He then modifies (serializes) this form into a notation that can be more easily parsed by another machine. This notation is called the Standard Description (S.D) of the machine. The S.D for the above machine, for example:

DADDCRDAA; DAADDRDAAA; DAAADDCCRDAAAA; DAAAADDRDA

He further shows that each such S.D can be represented by a number (by replacing A with 1, C with 2, etc.). This is the Description Number (D.N) of that machine. Note that the S.D and D.N can be used interchangeably, as you can always convert from one to the other.
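The conversion itself is mechanical. A throwaway sketch, using the substitutions from the paper (A→1, C→2, D→3, L→4, R→5, N→6, ;→7):

(def digit {\A \1, \C \2, \D \3, \L \4, \R \5, \N \6, \; \7})

(apply str (map digit "DADDCRDAA;"))  ;; => "3133253117"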

Universal Machine (U): This is one of the most brilliant insights of Alan Turing. Once he shows that any machine can be represented as a number, he creates a machine which can take the Description Number of any other machine M as input and print the same output as M. This can be considered the first ‘general-purpose’ computer ever conceived.

In the next section, I will describe how Turing uses this universal machine to prove that the decision problem cannot be solved. However, it is worth appreciating that today we remember Turing more for his conception of computing machines (later known as Turing machines), the Universal Machine, and its limitations than for settling the decision problem. This is one of those cases in history where the journey contributed more than the destination.

The Proof

The high-level approach that Turing takes to settle the decision problem can be outlined as follows:

[Flowchart: the four high-level steps of Turing’s proof]

Each of these steps might seem like a logical jump from the previous one, but each does follow consistently from the ones before it. However, there are a lot of tedious details in the representation of these statements in predicate calculus, and hence in the proofs.

I will try to present a brief explanation of the first of these four steps. However, by definition, this will not be rigorous or complete.

No general process exists to determine whether a machine is circle-free

Turing proves this by reductio ad absurdum, or proof by contradiction. First, he assumes that there exists a machine D which, given the S.D of any machine M, tests this S.D and, in a finite number of steps, outputs “u” (unsatisfactory) if M is circular and “s” (satisfactory) if it is circle-free.

If M is circle-free, we can give the S.D of M to the Universal Machine (U) to compute the output of M. Turing combines D and U to create a machine H which works as follows:

H flowchart

Here R(N) keeps track of the number of valid circle-free machines encountered so far.
If you follow the flowchart, you’ll see that the Nth digit of H’s output is the Nth digit in the output of the Nth valid circle-free machine.

For example, if you count up from 1:

1
2
3
4
...
313,325,317 -> M1 - Prints 0s to the right
...
3,133,225,317 -> M2 - Prints 1s to the right
...

The first valid D.N that you hit upon is 313,325,317. Let’s call this machine M1. Its output is an unending sequence of 0s (as it is circle-free). So the first digit in the output of H will be 0 (the first digit of M1).

Similarly, the second valid D.N you find will be 3,133,225,317, for M2. Its output will be an unending sequence of 1s. Hence the second digit in the output of H will be 1 (the second digit of M2).

Here, Turing points out that the machine H has to be circle-free by construction. It is composed of D and U. D is circle-free by our assumption. U computes each digit of a circle-free machine’s output in a finite number of steps. Hence H, the combination of both, has to be circle-free.

Now the problem arises when H reaches its own D.N; let’s call it K. How will D classify K? It cannot be circular (or unsatisfactory, u), as pointed out above. But if it is circle-free, U has to print the R(K)th digit of H.

Let’s see…
How many digits would H have printed by the time it reaches K? R(K – 1).
Now, in order to compute its R(K)th (i.e. R(K – 1) + 1) digit, it needs to evaluate itself using U, compute R(K) digits, and then print its R(K)th digit. When U evaluates H and reaches the R(K)th digit, it needs to evaluate H again. This leads to an infinite recursion!

By contradiction, Turing proves that there cannot be a general process by which we can determine whether a given S.D is that of a circle-free machine.

This is the crux of Turing’s proof. A variation of this problem is what has come to be known as the Halting Problem (defined by Martin Davis in 1958).

From here, Turing proves the 2nd step by reducing it to the first. Then he proves the 3rd step by reducing it to the 2nd, and so on. His machines do not have a notion of algebra, or even numbers, to begin with. Like I said earlier, there is extensive use of first-order predicate calculus to formulate a number system and to actually get from here to the decision problem. Maybe that can be the subject of a second post later.

Turing Completeness

As mentioned before, this terminology was coined later and was never used by Turing himself. In Computability Theory, a system is said to be Turing Complete if it can be used to simulate any Turing machine. It implies that the system is theoretically capable of expressing all tasks accomplishable by computers. Most general-purpose programming languages are Turing Complete.

Turing and Lambda Calculus

This book brought a lot of clarity to the relationship between Lambda Calculus and Turing’s work. Before reading it, for some reason, I had always assumed Lambda Calculus to be more recent than Turing’s machines.

In reality, Alonzo Church, the creator of Lambda Calculus, beat Turing by a very close margin in solving the decision problem. Church’s paper A Note on the Entscheidungsproblem, which also proved that the decision problem was unsolvable, was received by The Journal of Symbolic Logic on April 15, 1936, six weeks before Turing’s submission to the London Mathematical Society on May 28, 1936!

Turing then had to publish a 3-page appendix to his paper (in August 1936) proving that his notion of computability and Church’s notion of effective calculability are equivalent. Alonzo Church had received his Ph.D. in mathematics from Princeton and taught there as a professor from 1929 to 1967. In the fall of 1936, Turing went to Princeton to get his Ph.D. under Alonzo Church. Church’s students included other famous mathematicians, such as Stephen Kleene and Martin Davis, who later expanded on both Turing’s and Church’s work.

Lambda Calculus has had a direct influence on functional programming languages such as Lisp, Haskell, Scheme, and Clojure.

Intro to High Performance Computing – Experience

This spring, I completed the Intro to High Performance Computing course (CSE 6220) as part of OMSCS. It is one of the hardest (4.5/5) and also the highest-rated (4.8/5) courses in the program, per OMSCentral. Based on my experience, I concur with both ratings.

At a high level, the course covers the algorithmic aspects of maximizing the performance of your code. This includes things like parallelizing your code across all processors or across multiple machines, exploiting the memory hierarchy to your advantage, etc. The other ‘high performance’ course in the program, High Performance Computer Architectures (CS 6290), in contrast discusses maximizing performance more at the processor-architecture level.

Prof. Vuduc deserves special mention. He has put a lot of effort into making the course videos easy to understand and interesting. His hilarious antics make you laugh even while discussing the most complex topics. He is also very active on Piazza and participates in the office hours regularly.

There were 5 hands-on projects in total, all in C/C++, with one due every two weeks. These were the real time-sinks. Interestingly, they were also the most fun part of the course in my experience. They involved implementing the algorithms taught in the lectures, making everything you learn more ‘real’.

Key concepts

At a broad level, these were the key concepts I learned from the course:

  1. Shared memory model (aka dynamic multithreading model)
    1. Concepts of work and span (link) in analyzing parallel programs.
    2. Introduction to OpenMP library.
  2. Distributed memory models
    1. Parallel computing across network using message passing.
    2. The Alpha-Beta model (aka latency & inverse-bandwidth model) for analyzing distributed parallel programs.
    3. Introduction to OpenMPI library.
  3. Two level memory model
    1. I/O aware algorithms that can exploit the cache and main memory structures.
    2. Cache oblivious algorithms that still achieve optimal performance without being aware of the cache/memory structures.

The Alpha-Beta model for measuring the cost of distributed computing

For a more detailed overview, I would recommend going over the Course Syllabus (PDF).

Overall, I really enjoyed the course and am happy that I decided to take it, even though it was not directly related to my specialization (Interactive Intelligence).

Hyperledger Sawtooth – Introductory tutorial

Note: This blog post was originally written in April 2018 and depends on sawtooth-core v1.1.1. The instructions mentioned may not work for newer versions of Sawtooth.

In this blog post, I will try to quickly cover Hyperledger Sawtooth basics and explain how to get to a productive development setup starting from the Sawtooth core codebase.

Hyperledger is an open-source collaborative effort to create industry standard blockchain technologies. It is backed by the Linux Foundation. You can read more about the effort here – https://www.hyperledger.org/about.

This edX course would be a good starting place for anyone interested in Hyperledger – https://www.edx.org/course/blockchain-business-introduction-linuxfoundationx-lfs171x.

Sawtooth is one of the more mature projects (1.0 released in Jan 2018) within Hyperledger, initially contributed by Intel. It is designed with enterprise use-cases in mind. Among other things, Sawtooth introduces a novel consensus algorithm called Proof of Elapsed Time (PoET), and has support for permissioned private blockchains, parallel transaction execution, dynamic switching of consensus algorithms, compatibility with Ethereum (via the Seth transaction family), etc. It also has development support for a broad language base, including Python, Go, Java, etc.

Why this post?

Sawtooth is one of the most well-documented pieces of open-source software that I’ve come across. I would recommend anyone interested in learning Sawtooth to go through the docs – link.

Having said that, I feel that the level of information in the docs can be a bit daunting, especially for someone new to blockchain. Also, in my case, the fact that I wasn’t very comfortable with Docker added an additional layer of indirection/complexity to the learning process. I need to see how something starts up, beginning from the code, to fully understand how it works.

I couldn’t find any tutorials on Sawtooth outside the documentation as of today. I reached out to the friendly experts hanging out in the Sawtooth Rocket chat channel to clear up many issues I came across while trying to play with Sawtooth. This post is my effort to give back to the community, in the form of a basic intro to setting up Sawtooth from scratch.

Components

Sawtooth components

Transaction Processor

The transaction processor maintains all the domain-specific business logic for a transaction family (similar to smart contracts), such as:

  1. Serializing/Deserializing the state stored in the ledger to domain objects.
  2. Providing the business logic for validating transactions.

Validator

The validator handles consensus, all blockchain read/writes, peer interaction etc. It is agnostic of the business logic. In fact, one validator process running in a host can be connected to multiple transaction processor processes.

Client

Clients (CLI, web services etc.) interact with a transaction processor via a REST API (a separate process).

If you are interested in playing around with Sawtooth using pre-built images, the installation section of the Sawtooth docs would be helpful – link. You can use Docker images, Ubuntu packages, or install Sawtooth from the AWS Marketplace.

Getting your hands dirty

In this section, I will get a working instance of the Intkey transaction processor (TP) running locally in Python. Intkey is a sample TP for setting, incrementing, and decrementing values in a dictionary. First of all, check out the sawtooth-core codebase:

git clone https://github.com/hyperledger/sawtooth-core

We will still use a provided Docker container for running all our processes, to save us the trouble of getting the right versions of all the required dependencies. In our case, we will use the sawtooth-dev-python image. Also, we will use the pre-built validator and REST API (rather than running them from source), as these don’t need to be changed by application developers.

Setup the validator

Build everything (install Docker if you haven’t):

$ cd sawtooth-core/docker
$ docker build . -f sawtooth-dev-python -t sawtooth-dev-python

Now if you list your docker images, you should see the newly created one:

$ docker images
REPOSITORY          TAG    IMAGE ID     CREATED    SIZE
sawtooth-dev-python latest ecf3ae086754 2 days ago 817MB

Next, let’s start a container and get inside it:

$ cd sawtooth-core
$ docker run -v $(pwd):/project/sawtooth-core -it sawtooth-dev-python bash

This should start a container using the sawtooth-dev-python image and get you to a root prompt within the container, with our present working directory (sawtooth-core) mounted as /project/sawtooth-core.

root@43edef3881be:/project/sawtooth-core# ls

Before we start up the validator, we need to do some setup: basically, create the required keys and the genesis block.

# sawtooth keygen
# sawset genesis
# sawadm genesis config-genesis.batch

Next, let’s start up the validator:

# sawadm keygen
# cd bin/
# sawtooth-validator -vv

You will want to keep the validator running for the rest of the steps to work, so you will need to connect to the container from a different terminal. To do that, first find the container ID of the container you just created:

$ docker container ls
CONTAINER ID IMAGE               COMMAND CREATED   STATUS    PORTS              NAMES
43edef3881be sawtooth-dev-python "bash" 8 days ago Up 8 days 4004/tcp, 8008/tcp keen_keller

Now, to get into the container from this new terminal, you just need to do the following:

$ docker exec -it 43edef3881be bash

Now, we need to start up the Settings transaction processor, which is required for the validator to function correctly:

# cd bin/
# settings-tp -vv

You should see some logging in the validator terminal tab/window along these lines now:

[2018-04-12 07:25:27.657 INFO processor_handlers] registered transaction processor: connection_id=f519f4ff0bd1e26968d5bcd76cd71eed88d097c5a4846798c35f5d9c5efeb8845ea65fdf2172c12c7b4a226fc4214c18f6989c4048e0012c2fb5378252d67a08, family=sawtooth_settings, version=1.0, namespaces=['000000'], max_occupancy=10
[2018-04-12 07:25:27.705 INFO genesis] Genesis block created: 69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb (block_num:0, state:2cd74660fc59b472699aad3f6da884ae636191673e6773e6d81b9c8987065e9f, previous_block_id:0000000000000000)
[2018-04-12 07:25:27.709 INFO interconnect] Listening on tcp://127.0.0.1:8800
[2018-04-12 07:25:27.712 INFO chain] Chain controller initialized with chain head: 69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb (block_num:0, state:2cd74660fc59b472699aad3f6da884ae636191673e6773e6d81b9c8987065e9f, previous_block_id:0000000000000000)

There is one more piece required before we can get to our Intkey transaction processor – the REST API. Again, from a new terminal, get into the container and run:

# cd bin/
# sawtooth-rest-api -vv

At this point, you have a functional Sawtooth validator. You can play around with it using the sawtooth command. Some examples:

# sawtooth state list 
ADDRESS SIZE DATA
000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c12840f169a04216b7 110 b'\n...
HEAD BLOCK: "69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb"
# sawtooth transaction list
TRANSACTION_ID FAMILY VERS SIZE PAYLOAD
5964642adb957ca994f5789e9a5f9930853c20359a8adbc5c18ecf4b338fc9a00f544f42cf17d7cdc79531727fbdb307f4143c8104ec9ed0e5c3557a476cccdb sawtooth_settings 1.0 131 b'\x08\...

You can also directly query the REST API as follows:

# curl http://localhost:8008/blocks
{
 "data": [
 {
 "batches": [
 {
 "header": {
 "signer_public_key": "0317f01f8958bc6d404d1ffe88770fe927fef63022216f24484906760873501d7f",
 "transaction_ids": [
 "5964642adb957ca994f5789e9a5f9930853c20359a8adbc5c18ecf4b338fc9a00f544f42cf17d7cdc79531727fbdb307f4143c8104ec9ed0e5c3557a476cccdb"
 ]
 }.. // trimmed to keep it short

Intkey

The Transaction Processor

You’ll notice that pre-built Intkey transaction processor binaries are also available in the bin folder. However, let’s try to run it directly from the code. The main.py (code link) in the following directory actually starts the example Intkey processor:

/project/sawtooth-core/sdk/examples/intkey_python/sawtooth_intkey/processor/main.py

We need python3 for some of the dependencies. But if you try to run main.py directly from Python, you will run into a couple of issues. You’ll notice it depends on the sawtooth_sdk module, which in turn depends on the sawtooth_signing module. Let’s build them first.

// First, let's install sawtooth_signing
# cd /project/sawtooth-core/signing/
# python3 setup.py install

// Next, let's install sawtooth_sdk
# cd /project/sawtooth-core/sdk/python/sawtooth_sdk
# python3 setup.py install

Now, we need to tweak the main.py file a bit to work with our run-from-source approach.

It tries to import from the sawtooth_intkey.processor.handler module, which it will fail to find. The handler.py is, however, right beside main.py in the same directory. So let’s modify main.py to use that:

- from sawtooth_intkey.processor.handler import IntkeyTransactionHandler
+ from handler import IntkeyTransactionHandler

At this point, we should be able to run main.py without any errors. But it will exit without doing anything. That’s because no one is calling the main method. Let’s add a bit of code at the end of the file to do that:

if __name__ == "__main__":
    main()

Notice that you can modify the file on your host machine itself, i.e., outside the container. Since the sawtooth-core folder is shared with the container, it will automatically be up to date with the latest code.

Now, your Intkey transaction processor is ready to roll! You can try some of the commands as follows:

# python3 main.py -V
sawtooth-intkey (Hyperledger Sawtooth) version UNKNOWN

Here is how you really start it:

# python3 main.py -vv
[2018-04-12 08:09:31.254 INFO core] register attempt: OK

You should also see the validator logging, in its terminal, that the Intkey TP connected successfully. Something like:

[2018-04-12 08:09:31.248 INFO processor_handlers] registered transaction processor: connection_id=acf1ed62db9a2e18c88af958532ac7c9857d6db50d951348eabda8ba5e0913e9a4deeb3fa607cbf6897efdc0585559ea38470cfbbe96141668369db16d3f45a5, family=intkey, version=1.0, namespaces=['1cf126'], max_occupancy=10

Intkey CLI

Let’s try running the CLI to interact with the Intkey transaction processor.

You will notice the provided intkey_cli.py file references other files in its directory using the sawtooth_intkey.client_cli.xyz path. For it to be able to find these files correctly, let’s copy it to the appropriate location:

# cd /project/sawtooth-core/sdk/examples/intkey_python/
# cp sawtooth_intkey/client_cli/intkey_cli.py .

Next, similar to the earlier main.py, we need to add an entry point to intkey_cli.py. Add the following snippet at the end of the file:

if __name__ == '__main__':
    main_wrapper()

This completes our setup! Let’s play around with it:

# python3 intkey_cli.py --help
usage: intkey_cli.py [-h] [-v] [-V]
 {set,inc,dec,show,list,generate,load,populate,create_batch,workload}
 ...

optional arguments:
 -h, --help show this help message and exit
 -v, --verbose enable more verbose output
 -V, --version display version information

subcommands:
 {set,inc,dec,show,list,generate,load,populate,create_batch,workload}
 set Sets an intkey value
 inc Increments an intkey value
 dec Decrements an intkey value
 show Displays the specified intkey value
 list Displays all intkey values

If you list all the entries in the blockchain now, the result should be empty (as we did not insert any):

# python3 intkey_cli.py list

Now, let’s try setting and getting a key-value pair:

// Set key 'a' to value 10
# python3 intkey_cli.py set a 10
{
 "link": "http://127.0.0.1:8008/batch_statuses?id=46f82450d39c6c59968efc83a15669477804ccb9ad569a7b61bebecf6cf55f931e0bd19c4de42926b7ce90b320b3ffe50e411130957b1812f8e4f2a45865c8ed"
}

//Let's try listing again
# python3 intkey_cli.py list
a: 10

// Let's try reading the key
# python3 intkey_cli.py show a
a: 10

// Let's try incrementing 'a' by 2
# python3 intkey_cli.py inc a 2
{
 "link": "http://127.0.0.1:8008/batch_statuses?id=8d2f5e0ef324ba15a5cd95392cd65caf6df7124f7c06dc0531feb77cfda49f047c765cb96557a3de0f91902e7b2b3212111b997fe663c7b9bb610bd7a7ad6759"
}

// Show again
# python3 intkey_cli.py show a
a: 12

You can also do batch import of random keys (for testing, measuring throughput etc.) as follows:

// Create 1 batch file with operations on 5 keys
# python3 intkey_cli.py create_batch -K 5 -c 1
Writing to batches.intkey...

// Load the batch file
# python3 intkey_cli.py load
batches: 2 batch/sec: 72.39234705765597

// List the added keys
# python3 intkey_cli.py list
a: 12
ZmyetF: 30086
ReocMV: 59247
CINQSf: 57819
BWADZo: 39267
RoDdEV: 47475

You should be able to see the corresponding logging for each of these commands in the REST API, transaction processor and validator terminal consoles.

In summary, we started by checking out the sawtooth-core codebase and ended with a Docker container running the following:

  1. Validator
  2. Settings Transaction Processor
  3. REST API
  4. Intkey Transaction Processor (from source)
  5. Intkey CLI (from source)

This concludes my hands-on overview of working with Hyperledger Sawtooth.

I hope this will be helpful as a starting place for developers trying to create their own custom transaction families in Sawtooth. If you run into any issues while following these instructions, or have any questions, please let me know – I’ll try to help you out. As I mentioned earlier, the Sawtooth Rocket chat is a good place to get help from experts in case you are stuck.