Graduate Algorithms – Experience

I completed the Graduate Algorithms course (CS 8803) as part of my OMSCS program in Fall 2018. It was an interesting and really informative course.

The main topics covered in the course include dynamic programming; divide and conquer, including FFT; randomized algorithms, including the RSA cryptosystem and hashing using Bloom filters; graph algorithms; max-flow algorithms; linear programming; and NP-completeness.


Of these, I had not studied FFTs and max-flow algorithms during my undergrad. Also, though I had studied basic DP (dynamic programming) in my undergrad (Knapsack, matrix multiplication etc.) and had prepared some more for tech interviews, I had never had rigorous, formal training in DP before. This course had ample coursework (homeworks and exams) focused on building that DP intuition, which I really liked.
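Since Knapsack is mentioned as one of the classic DP examples, here is a minimal sketch of the 0/1 knapsack recurrence in Python (a toy illustration of my own, not taken from the course material):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: dp[c] = best value achievable with capacity c."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

For example, `knapsack([1, 3, 4], [15, 20, 30], 4)` returns 35, picking the items of weight 1 and 3.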

The course text was Algorithms by Dasgupta, Papadimitriou and Vazirani. It is a really good, concise introduction to most advanced algorithm topics. It is especially good as a college textbook because you can realistically expect students to read it cover to cover, unlike, say, Introduction to Algorithms by Cormen et al., which is better suited as a long-term reference manual. My one gripe with the course was that it did not cover the last chapter of Dasgupta, on Quantum Computing.

The course grading breakdown was as follows:

  • Homeworks: 5%
  • Project: 10%
  • Midterms (two): 25% each
  • Final exam: 35%

The midterms and final exam were closed-book, 3-hour exams. The bulk of the grade (85%) came from these. There were 8 homeworks, spaced roughly one per week, which added up to just 5%. These usually involved 4-5 questions which required you to write your solution in pseudo-code (only in the case of DP) or in plain English. One might think that doing the homeworks is just not worth the effort. However, I cannot overemphasize how important and helpful for the exams these were.

Another thing I am ever grateful to the homeworks for is introducing me to LaTeX. Until then, though I had heard that LaTeX is pretty good for writing technical documents, I never really had a good use-case or forcing function to make me learn it. The HWs usually involved formulas, big-O complexities, pseudo-code, matrices etc. This gave me a really good opportunity to learn LaTeX. It's amazing!
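For the curious, a typical homework answer in LaTeX might look something like this (a made-up snippet, not from an actual assignment):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

The recurrence $T(n) = 2\,T(n/2) + O(n)$ solves to $T(n) = O(n \log n)$.

For the strings \texttt{a} and \texttt{ab}, the edit-distance table is
\[
D = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix},
\]
where $D_{i,j}$ is the edit distance between the first $i$ and first $j$
characters of the two strings.

\end{document}
```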

While we are on the topic of LaTeX, if it is something that interests you, may I suggest the Emacs plugin for LaTeX? I used the Spacemacs LaTeX layer. Among other things, it supports previewing the rendered doc in Emacs itself (SPC m p p)!

The TAs and the professor were prompt in answering queries on Piazza and during office hours. The grading turnaround times were also really fast. Overall, I really enjoyed the course. My ratings:

  • Difficulty: 4/5
  • Rating: 4.5/5

Machine Learning for Trading – Experience

I completed the Machine Learning for Trading (CS 7646) course during the Summer of 2018. This was a fun and light course.

The course was divided into 3 mini-courses:

The first part of the course was mainly about getting familiar with NumPy and Pandas. The key takeaway was how small differences (optimizations) in how you handle large arrays using NumPy can lead to substantial performance gains.
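As a toy illustration (my own, not from the course), summing element-wise products with a Python loop versus one vectorized NumPy call gives identical results, but the vectorized version runs in optimized C and is typically orders of magnitude faster on large arrays:

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Slow: an explicit Python-level loop over the elements.
loop_total = 0.0
for x, y in zip(a, b):
    loop_total += x * y

# Fast: a single vectorized call doing the same computation.
vec_total = np.dot(a, b)

assert np.isclose(loop_total, vec_total)
```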

For me, the second part was the most enlightening. It introduced various concepts from the stock market. This was something I had always wanted to learn, and in fact had been learning on an ad-hoc, need-to-know basis from Wikipedia, Investopedia etc. However, I had always wanted a more formal and comprehensive introduction to these concepts. Some of the ones discussed include:

  1. Going long or short with shares
  2. Capital Assets Pricing Model (CAPM)
  3. Efficient Markets Hypothesis
  4. The Fundamental Law of active portfolio management
  5. Arbitrage Pricing Theory
  6. Technical analysis (Bollinger Bands, Sharpe ratio etc.)
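To make one of these concrete, the Sharpe ratio from the technical-analysis list is, in its commonly used daily form, the mean of daily returns over their standard deviation, annualized by multiplying with √252. A quick sketch (the return values are made up, and the risk-free rate is ignored for simplicity):

```python
import numpy as np

def sharpe_ratio(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio of a series of daily returns."""
    r = np.asarray(daily_returns, dtype=np.float64)
    return np.sqrt(periods_per_year) * r.mean() / r.std()

daily = [0.01, -0.005, 0.02, 0.0, 0.015]
ratio = sharpe_ratio(daily)
```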

The third part was about using different ML techniques (supervised – regression, decision trees; reinforcement learning – Q-learning etc.) to predict future stock prices based on past data. I was familiar with most of these techniques from previous AI/ML courses I had taken. However, using them with time-series data in this setup was something new.
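For reference, the heart of Q-learning is a one-line update rule that nudges Q(s, a) toward r + γ·max Q(s′, ·); here is a toy sketch (states, actions and numbers are invented for illustration):

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning step on a dict-of-dicts Q table."""
    best_next = max(q[next_state].values(), default=0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy example: two market states with one action each.
q = {"s0": {"buy": 0.0}, "s1": {"sell": 0.0}}
q_update(q, "s0", "buy", reward=1.0, next_state="s1")
```

After this single step, q["s0"]["buy"] moves from 0 to 0.5, i.e. 0.5 * (1.0 + 0.9 * 0 - 0).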

There were five projects (4 proper and 1 intro) in this course and 2 closed-book exams. The professor also recommended some good/interesting external resources for students to understand the stock market better. The movie The Big Short was even part of the syllabus. The course was well-paced and well-organized. My ratings:

  • Difficulty: 2.5/5
  • Rating: 4/5

The Annotated Turing – Abridged

I recently read the book The Annotated Turing by Charles Petzold. It was really enlightening. Though all computer science graduates study concepts like Turing machines and Turing Completeness as part of their college courses, questions such as how and why Turing came up with these concepts are usually not taught (spoiler alert: Turing did not come up with either terminology I just mentioned). Petzold does a good job of making Turing's classic 1936 paper more accessible without dumbing it down too much. He also gives a lot of historical context, as well as the background knowledge that Turing assumes his readers to have.


The book is aptly sub-titled A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine.  In it, Turing’s paper is shown as is, but is supplemented with the author’s own annotations in between. In this post, I will try to present a very concise overview of the key takeaways from the book.

Das Entscheidungsproblem

Turing's actual 1936 paper was titled On Computable Numbers, With An Application To The Entscheidungsproblem. To understand the paper, we need to understand what that problem is. Entscheidung means decision. In 1900, during his address at the Second International Congress of Mathematicians, David Hilbert, one of the most prominent mathematicians of the time, presented 23 questions as challenges for mathematicians to solve in the next century. The 10th problem on the list was this:

10. Determination of the Solvability of a Diophantine Equation

Diophantus was a Greek mathematician who is estimated to have lived around 250 CE and wrote a book called Arithmetica. A Diophantine equation has come to mean an algebraic equation for which only whole-number solutions are allowed.

Given such a problem, Hilbert was interested in a process with a finite number of steps to determine whether the problem has a solution, i.e., is the problem solvable? This problem came to be known as the Entscheidungsproblem, or decision problem. Note that Hilbert was not interested in a general method to solve Diophantine equations here, only in deciding solvability.

Turing proves that no such general decision process exists.

 Gödel’s Incompleteness Theorem

As Turing himself remarks in the paper, his results are superficially similar to the famous Incompleteness Theorem published by Kurt Gödel in 1931. However, he clarifies the difference between the two results.

Gödel essentially shows that any consistent formal system (powerful enough to express basic arithmetic) is necessarily incomplete: we can find propositions P such that neither P nor ~P (the negation of P) is provable within that system.

Until that time, mathematicians were attempting to create flawless systems which were both complete and consistent. Gödel's results proved that such attempts were in vain.

Even with these results, a general decision process as described by Hilbert could conceivably exist: given an unprovable proposition, it would identify it as unprovable, and it would similarly identify provable propositions correctly. Such a decision process is what Turing shows cannot exist.

Turing’s solution

Turing’s approach leverages Predicate Calculus extensively. I will not try to explain the details of Turing’s proof here (for that, you need to read the book), but I will try to summarize the key ideas that he builds up and uses in his proof.

Computing Machine (M): a machine which

  1. is supplied with a tape running through it, divided into sections (squares), each capable of bearing a symbol.
  2. whose motion at each stage is completely determined by its configuration.
  3. and which can perform one of the following operations at any moment:
    1. read one symbol
    2. print one of two kinds of symbols onto a square:
      1. figures (either 0 or 1)
      2. other symbols
    3. change the square being scanned (by shifting left or right)

Here configuration is what tells the machine what action to perform when it encounters a symbol. Henceforth, I’ll refer to Computing Machines as just machines.

Circular and circle-free machines: if a machine never writes more than a finite number of symbols of the first kind (0 or 1), it is called circular. Otherwise, it is circle-free. This notion becomes one of the most crucial aspects of Turing's proof later.

Notice that the terminology here is a bit counter-intuitive. In Turing's formulation, a machine which never stops writing digits is considered good (circle-free), whereas one which produces only finitely many digits is bad (circular).

Computable Sequences: A sequence is said to be computable if it can be computed by a circle-free machine. Turing shows that his machines are capable of computing all sequences that we generally consider as computable (such as pi).

Description Number (D.N): Turing shows how all machines can be represented using a standard form such as:

q1 S0 S1 R q2; q2 S0 S0 R q3; q3 S0 S2 R q4; q4 S0 S0 R q1;

Here, the first section (delimited by ';'), q1 S0 S1 R q2;, can be interpreted as: from state q1, if the machine reads symbol S0, print symbol S1, shift right and transition to state q2.
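To make the rules concrete, here is a tiny simulator of my own for this four-rule machine (treating S0 as a blank, S1 as '0' and S2 as '1'):

```python
# Each rule: (state, symbol read) -> (symbol to write, move, next state)
rules = {
    ("q1", " "): ("0", "R", "q2"),
    ("q2", " "): (" ", "R", "q3"),
    ("q3", " "): ("1", "R", "q4"),
    ("q4", " "): (" ", "R", "q1"),
}

def run(rules, steps):
    """Run a right-moving tape machine for a number of steps."""
    tape, pos, state = [], 0, "q1"
    for _ in range(steps):
        while pos >= len(tape):
            tape.append(" ")              # extend the tape on demand
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

print(run(rules, 8))   # -> "0 1 0 1 "
```

The machine prints the figures 0 and 1 on alternate squares, forever.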

He then serializes this form into a notation that can be more easily parsed by another machine. This notation is called the Standard Description (S.D) of the machine: each state qi becomes a D followed by i A's, each symbol Sj becomes a D followed by j C's, and the move and next state follow. The S.D for the machine above, for example, is:

DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA;

He further says each such S.D can be represented by a number (by replacing A with 1, C with 2 etc.). This is the Description Number (D.N) of that machine. Note that the S.D and D.N can be used interchangeably, as you can always convert from one to the other.
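Using Turing's substitutions (A→1, C→2, D→3, L→4, R→5, N→6 and ';'→7), the S.D-to-D.N conversion is mechanical. A small sketch:

```python
def sd_to_dn(sd):
    """Convert a Standard Description to its Description Number."""
    digits = {"A": "1", "C": "2", "D": "3", "L": "4", "R": "5", "N": "6", ";": "7"}
    return int("".join(digits[ch] for ch in sd))

# A one-state machine (q1 S0 S1 R q1) that prints 0s forever:
print(sd_to_dn("DADDCRDA;"))   # -> 313325317
```

This is exactly the D.N of the 'prints 0s to the right' machine that appears in the proof below.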

Universal Machine (U): This is one of the most brilliant insights of Alan Turing. Once he shows that any machine can be represented as a number, he creates a machine U which can take the Description Number of any other machine M as input and print the same output as M. This can be considered the first 'general-purpose' computer ever conceived.

In the next section, I will describe how Turing uses this universal machine to prove that the decision problem cannot be solved. However, it is worth appreciating that today we remember Turing more for his conception of Computing Machines (later known as Turing Machines), the Universal Machine and its limitations than for resolving the decision problem. This is one of those cases in history where the journey contributed more than the destination.

The Proof

The high level approach that Turing takes to solve the decision problem can be outlined as follows:

  1. There is no general process to determine whether a given machine is circle-free.
  2. There is no general process to determine whether a given machine ever prints a given symbol (say, 0).
  3. For any machine M, a predicate-calculus formula Un(M) can be constructed which is provable if and only if M ever prints 0.
  4. Hence, if the Entscheidungsproblem had a solution, it could decide whether M ever prints 0, contradicting step 2.


Each of these steps might seem like a logical jump from the previous one, but they do follow consistently. However, there are a lot of tedious details in the representation of these statements in predicate calculus, and hence in the proofs.

I will try to present a brief explanation of the first of these four steps. However, by definition, this will not be rigorous or complete.

No general process exists to determine whether a machine is circle free

Turing proves this by reductio ad absurdum, or proof by contradiction. First, he assumes that there exists a machine D which, given the S.D of any machine M, tests it and, in a finite number of steps, outputs "u" (unsatisfactory) if M is circular and "s" (satisfactory) if it is circle-free.

If M is circle-free, we can give the S.D of M to the Universal Machine (U) to compute the output of M. Turing combines D and U to create a machine H which works as follows:

H flowchart

Here, R(N) keeps track of the number of valid circle-free machines encountered so far.
If you follow along the flow-chart, you'll see that the Nth digit of H's output is the Nth digit in the output of the Nth valid circle-free machine.

For example, if you count up from 1:

313,325,317 -> M1 - Prints 0s to the right
3,133,225,317 -> M2 - Prints 1s to the right

The first valid D.N that you hit upon is 313,325,317. Let's call this machine M1. Its output is an unending sequence of 0s (as it is circle-free). So the first digit in the output of H will be 0 (the first digit of M1's output).

Similarly, the second valid D.N that you will find will be 3,133,225,317, for M2. Its output will be an unending sequence of 1s. Hence the second digit in the output of H will be 1 (the second digit of M2).

Here, Turing points out that the machine H has to be circle-free by construction. It is composed of D and U. D is circle-free by our assumption, and U is only ever asked to compute a finite number of digits of each circle-free machine's output. Hence H, the combination of the two, has to be circle-free.

Now the problem arises when H reaches its own D.N; let's call it K. How will D classify K? It cannot be circular (unsatisfactory, u), as pointed out above. But if it is circle-free, U has to print the R(K)th digit of H.

Let's see. How many digits would H have printed by the time it reaches K? R(K – 1).
Now, in order to compute its R(K)th (i.e. R(K – 1) + 1st) digit, it needs to evaluate itself using U, compute R(K) digits, and then print its R(K)th digit. When U evaluates H and reaches the R(K)th digit, it needs to evaluate H again. This leads to an infinite recursion!

By contradiction, Turing proves that there cannot be a general process by which we can determine whether a given S.D is that of a circle-free machine.

This is the crux of Turing’s proof. A variation of this problem is what has come to be known as the Halting Problem (defined by Martin Davis in 1958).

From here, Turing proves the 2nd step by reducing it to the first, then the 3rd step by reducing it to the 2nd, and so on. His machines do not have a notion of algebra or even numbers to begin with. Like I said earlier, there is extensive use of first-order predicate calculus to formulate a number system and to actually get from here to the decision problem.
Maybe that can be a subject for a second post later.

Turing Completeness

As mentioned before, this terminology was coined later and was never used by Turing himself. In Computability Theory, a system is said to be Turing Complete if it can be used to simulate any Turing machine. It implies that the system is theoretically capable of expressing all tasks accomplishable by computers. Most general-purpose programming languages are Turing Complete.

Turing and Lambda Calculus

This book brought a lot of clarity to the relationship between Lambda Calculus and Turing's work. Before reading it, for some reason, I had always assumed Lambda Calculus to be more recent.

In reality, Alonzo Church, the creator of Lambda Calculus, beat Turing by a very close margin in solving the decision problem. Church's paper A Note on the Entscheidungsproblem, which also proved that the decision problem was unsolvable, was received by The Journal of Symbolic Logic on April 15, 1936, six weeks before Turing's submission to the London Mathematical Society on May 28, 1936!

Turing then had to publish a 3-page appendix to his paper (in August 1936) proving that his notion of computability and Church's notion of effective calculability are equivalent. Alonzo Church had received his Ph.D. in mathematics from Princeton and taught there as a professor from 1929 to 1967. In the fall of 1936, Turing went to Princeton to get his Ph.D. under Alonzo Church. Church's other students included famous mathematicians such as Stephen Kleene and Martin Davis, who later expanded on both Turing's and Church's work.
Lambda Calculus has had a direct influence on functional programming languages such as Lisp, Haskell, Scheme, Clojure etc.
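To get a small taste of that influence, Church numerals (the Lambda Calculus encoding of natural numbers) translate almost directly into Python lambdas; a toy sketch of my own:

```python
# A Church numeral n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # -> 5
```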


Intro to High Performance Computing – Experience

This Spring, I completed the Intro to High Performance Computing course (CSE 6220) as part of OMSCS. It is one of the hardest (4.5/5) and also one of the highest-rated (4.8/5) courses in the program, as per OMS Central. Based on my experience, I concur with both ratings.

At a high level, the course covers the algorithmic aspects of maximizing the performance of your code. This includes things like parallelizing your code across all processors or across multiple machines, exploiting the memory hierarchy to your advantage, etc. The other 'high performance' course in the program, High Performance Computer Architectures (CS 6290), in contrast discusses maximizing performance at the processor-architecture level.

Prof. Vuduc deserves special mention. He has put a lot of effort into making the course videos easy to understand and interesting. His hilarious antics make you laugh even while discussing the most complex topics. He is also very active on Piazza and participates in the office hours regularly.

There were 5 hands-on projects in total, all in C/C++, with one due every two weeks. These were the real time-sinks. Interestingly, they were also the most fun part of the course in my experience. They involved implementing the algorithms taught in the lectures, making everything you learn more 'real'.

Key concepts

At a broad level, these were the key concepts I learned from the course:

  1. Shared memory model (aka dynamic multithreading model)
    1. Concepts of work and span (link) in analyzing parallel programs.
    2. Introduction to OpenMP library.
  2. Distributed memory models
    1. Parallel computing across network using message passing.
    2. The Alpha-Beta model (aka latency & inverse-bandwidth model) for analyzing distributed parallel programs.
    3. Introduction to OpenMPI library.
  3. Two level memory model
    1. I/O aware algorithms that can exploit the cache and main memory structures.
    2. Cache oblivious algorithms that still achieve optimal performance without being aware of the cache/memory structures.


For a more detailed overview, I would recommend going over the Course Syllabus (PDF).

Overall, I really enjoyed the course and am happy that I decided to take it though it was not directly related to my specialization (Interactive Intelligence).

Hyperledger Sawtooth – Introductory tutorial

Note: This blog post was originally written in April 2018 and depends on sawtooth-core v1.1.1. The instructions mentioned may not work for newer versions of Sawtooth.

In this blog post, I will try to quickly cover Hyperledger Sawtooth basics and explain how to get to a productive development setup starting from the Sawtooth core codebase.

Hyperledger is an open-source collaborative effort to create industry-standard blockchain technologies. It is backed by the Linux Foundation. You can read more about the effort on the Hyperledger website.

This edX course would be a good starting place for anyone interested in Hyperledger.

Sawtooth is one of the more mature projects (1.0 released in Jan 2018) within Hyperledger, initially contributed by Intel. It is designed with enterprise use-cases in mind. Among other things, Sawtooth introduces a novel consensus algorithm called Proof of Elapsed Time (PoET), and has support for permissioned private blockchains, parallel transaction execution, dynamic switching of consensus algorithms, compatibility with Ethereum (via the Seth transaction family) etc. It also has development support for a broad language base, including Python, Go, Java etc.

Why this post?

Sawtooth is one of the most well-documented pieces of open-source software that I've come across. I would recommend anyone interested in learning Sawtooth to go through it – link.

Having said that, I feel that the level of information in the docs can be a bit daunting, especially for someone new to blockchain. Also, in my case, the fact that I wasn't very comfortable with Docker added a layer of indirection/complexity to the learning process. I need to be able to see how something starts up, beginning from the code, to fully understand how it works.

I couldn't find any tutorials on Sawtooth outside the documentation as of today. I reached out to the friendly experts hanging out in the Sawtooth Rocket.Chat channel to resolve many issues I came across while trying to play with Sawtooth. This post is my effort to give back to the community, in the form of a basic intro to setting up Sawtooth from scratch.


Sawtooth components

Transaction Processor

The transaction processor holds all the domain-specific business logic for a transaction family (similar to smart contracts), such as:

  1. Serializing/Deserializing the state stored in the ledger to domain objects.
  2. Providing the business logic for validating transactions
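To make those responsibilities concrete, here is a deliberately simplified, SDK-free sketch of a handler (a real one subclasses TransactionHandler from sawtooth_sdk and reads/writes state through a context object; all names below are illustrative only):

```python
import json

class ToyIntkeyHandler:
    """Toy stand-in for an Intkey-style transaction handler."""
    family_name = "intkey"
    family_versions = ["1.0"]

    def apply(self, payload, state):
        txn = json.loads(payload)                     # 1. deserialize
        if txn["verb"] not in ("set", "inc", "dec"):  # 2. validate
            raise ValueError("invalid verb")
        name, value = txn["name"], txn.get("value", 0)
        if txn["verb"] == "set":
            state[name] = value
        elif txn["verb"] == "inc":
            state[name] += value
        else:
            state[name] -= value

state = {}
handler = ToyIntkeyHandler()
handler.apply(json.dumps({"verb": "set", "name": "a", "value": 10}), state)
handler.apply(json.dumps({"verb": "inc", "name": "a", "value": 2}), state)
assert state["a"] == 12
```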


Validator

The validator handles consensus, all blockchain reads/writes, peer interaction etc. It is agnostic of the business logic. In fact, one validator process running on a host can be connected to multiple transaction processor processes.


REST API

Clients (CLIs, web services etc.) interact with the validator via a REST API (a separate process).

If you are interested in playing around with Sawtooth using pre-built images, the installation section of the Sawtooth docs would be helpful – link. You can use Docker images, Ubuntu packages, or install Sawtooth from the AWS Marketplace.

Getting your hands dirty

In this section, I will try to get a working instance of the Intkey transaction processor (TP) running locally in Python. Intkey is a sample TP for setting, incrementing and decrementing values in a dictionary. First of all, check out the sawtooth-core codebase:

git clone

We will still use the provided Docker container for running all our processes, to save us the trouble of getting the right versions of all the required dependencies. In our case, we will use the sawtooth-dev-python image. Also, we will use the pre-built validator and REST API directly (not run them from source), as these don't need to be changed by application developers.

Setup the validator

Build everything (install Docker if you haven't):

$ cd sawtooth-core/docker
$ docker build . -f sawtooth-dev-python -t sawtooth-dev-python

Now if you list your docker images, you should see the newly created one:

$ docker images
sawtooth-dev-python latest ecf3ae086754 2 days ago 817MB

Next, let’s start a container and get inside it:

$ cd sawtooth-core
$ docker run -v $(pwd):/project/sawtooth-core -it sawtooth-dev-python bash

This should start a container using the sawtooth-dev-python image and get you to a root prompt within the container, with our present working directory (sawtooth-core) mounted as /project/sawtooth-core.

root@43edef3881be:/project/sawtooth-core# ls

Before we start up the validator, we need to do some setup here – basically, create the required keys and the genesis block:

# sawtooth keygen
# sawset genesis
# sawadm genesis config-genesis.batch

Next, let’s start up the validator:

# sawadm keygen
# cd bin/
# sawtooth-validator -vv

You will want to keep the validator running for the rest of the steps to work, so you will need to connect to the container from a different terminal. To do that, first find the container id of the container you just created:

$ docker container ls
43edef3881be sawtooth-dev-python "bash" 8 days ago Up 8 days 4004/tcp, 8008/tcp keen_keller

Now, to get into the container from this new terminal, you just need to do the following:

$ docker exec -it 43edef3881be bash

Now we need to start up the settings transaction processor, which is required for the validator to function correctly:

# cd bin/
# settings-tp -vv

You should now see some logging along these lines in the validator terminal tab/window:

[2018-04-12 07:25:27.657 INFO processor_handlers] registered transaction processor: connection_id=f519f4ff0bd1e26968d5bcd76cd71eed88d097c5a4846798c35f5d9c5efeb8845ea65fdf2172c12c7b4a226fc4214c18f6989c4048e0012c2fb5378252d67a08, family=sawtooth_settings, version=1.0, namespaces=['000000'], max_occupancy=10
[2018-04-12 07:25:27.705 INFO genesis] Genesis block created: 69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb (block_num:0, state:2cd74660fc59b472699aad3f6da884ae636191673e6773e6d81b9c8987065e9f, previous_block_id:0000000000000000)
[2018-04-12 07:25:27.709 INFO interconnect] Listening on tcp://
[2018-04-12 07:25:27.712 INFO chain] Chain controller initialized with chain head: 69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb (block_num:0, state:2cd74660fc59b472699aad3f6da884ae636191673e6773e6d81b9c8987065e9f, previous_block_id:0000000000000000)

There is one more piece required before we can get to our Intkey transaction processor – the REST API. Again, from a new terminal, get into the container and run:

# cd bin/
# sawtooth-rest-api -vv

At this point, you have a functional Sawtooth validator. You can play around with it using the sawtooth command. Some examples:

# sawtooth state list 
000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c12840f169a04216b7 110 b'\n...
HEAD BLOCK: "69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb"
# sawtooth transaction list
5964642adb957ca994f5789e9a5f9930853c20359a8adbc5c18ecf4b338fc9a00f544f42cf17d7cdc79531727fbdb307f4143c8104ec9ed0e5c3557a476cccdb sawtooth_settings 1.0 131 b'\x08\...

You can also directly query the REST API as follows:

# curl http://localhost:8008/blocks
 "data": [
 "batches": [
 "header": {
 "signer_public_key": "0317f01f8958bc6d404d1ffe88770fe927fef63022216f24484906760873501d7f",
 "transaction_ids": [
 }.. // trimmed to keep it short


The Transaction Processor

You'll notice that pre-built Intkey transaction processor binaries are also available in the bin folder. However, let's try to run it directly from the code. The main file (code link) in the following directory actually starts the example Intkey processor:


We need Python 3 for some of the dependencies. But if you directly try to run the file with Python, you will run into a couple of issues. You'll notice it depends on the sawtooth_sdk module, which in turn depends on the sawtooth_signing module. Let's build them first.

// First, let's install sawtooth_signing
# cd /project/sawtooth-core/signing/
# python3 install

// Next, let's install sawtooth_sdk
# cd /project/sawtooth-core/sdk/python/sawtooth_sdk
# python3 install

Now, we need to tweak the file a bit to work with the run-from-source approach.

It tries to import from the sawtooth_intkey.processor.handler module, which it will fail to find. The handler file is, however, right beside it in the same directory, so let's modify the import to use that:

- from sawtooth_intkey.processor.handler import IntkeyTransactionHandler
+ from handler import IntkeyTransactionHandler

At this point, we should be able to run the file without any errors. But it will exit without doing anything. That's because no one is calling the main method. Let's add a bit of code at the end of the file to do that:

if __name__ == "__main__":
    main()

Notice that you can modify the file on your host machine itself, i.e. outside the container. Since the sawtooth-core folder is shared with the container, it will automatically be up to date with the latest code.

Now, your Intkey transaction processor is ready to roll! You can try some of the commands as follows:

# python3 -V
sawtooth-intkey (Hyperledger Sawtooth) version UNKNOWN

Here is how you actually start it:

# python3 -vv
[2018-04-12 08:09:31.254 INFO core] register attempt: OK

You should also see the validator logging in its terminal that the intkey TP connected successfully. Something like:

[2018-04-12 08:09:31.248 INFO processor_handlers] registered transaction processor: connection_id=acf1ed62db9a2e18c88af958532ac7c9857d6db50d951348eabda8ba5e0913e9a4deeb3fa607cbf6897efdc0585559ea38470cfbbe96141668369db16d3f45a5, family=intkey, version=1.0, namespaces=['1cf126'], max_occupancy=10

Intkey CLI

Let’s try running the CLI to interact with the Intkey transaction processor.

You will notice the provided file references other files in its directory using relative paths. For it to be able to find these files correctly, let's copy it to the appropriate location:

# cd /project/sawtooth-core/sdk/examples/intkey_python/
# cp sawtooth_intkey/client_cli/ .

Next, similar to earlier, we need to add an entry point for the CLI file as well. Add the following snippet at the end of the file:

if __name__ == '__main__':
    main()

This completes our setup! Let’s play around with it:

# python3 --help
usage: [-h] [-v] [-V]

optional arguments:
 -h, --help show this help message and exit
 -v, --verbose enable more verbose output
 -V, --version display version information

 set Sets an intkey value
 inc Increments an intkey value
 dec Decrements an intkey value
 show Displays the specified intkey value
 list Displays all intkey values

If you list all the entries in the blockchain now, it should be empty (as we did not insert any):

# python3 list

Now, let's try setting and getting a key-value pair:

// Set key 'a' to value 10
# python3 set a 10
 "link": ""

//Let's try listing again
# python3 list
a: 10

// Let's try reading the key
# python3 show a
a: 10

// Let's try incrementing 'a' by 2
# python3 inc a 2
 "link": ""

// Show again
# python3 show a
a: 12

You can also do batch import of random keys (for testing, measuring throughput etc.) as follows:

// Create 1 batch file with operations on 5 keys
# python3 create_batch -K 5 -c 1
Writing to batches.intkey...

// Load the batch file
# python3 load
batches: 2 batch/sec: 72.39234705765597

// List the added keys
# python3 list
a: 12
ZmyetF: 30086
ReocMV: 59247
CINQSf: 57819
BWADZo: 39267
RoDdEV: 47475

You should be able to see the corresponding logging for each of these commands in the REST API, transaction processor and validator terminal consoles.

In summary, we started by checking out the sawtooth-core codebase and ended with a docker container running the following:

  1. Validator
  2. Settings Transaction Processor
  3. REST API
  4. Intkey Transaction Processor (from source)
  5. Intkey CLI (from source)

This concludes my hands-on overview of working with Hyperledger Sawtooth.

I hope this will be helpful as a good starting place for developers trying to create their own custom transaction families in Sawtooth. In case you run into any issues while following these instructions or have any questions, please let me know. I’ll try to help you out. As I mentioned earlier, the Sawtooth Rocket chat is a good place to get help from experts in case you are stuck.

Spacemacs – Among the stars aboard the Evil flagship (vim)


I've been using Spacemacs as my IDE/editor of choice for about half a year now. I absolutely love it! Its basic premise is that

“The best editor is neither Emacs nor Vim, it’s Emacs and Vim!”.

In this post, I will try to explain briefly what it is and how I use it, in a way that is newbie-friendly.


I’ve been programming for over 10 years now, professionally for about 4 years. I’ve worked in a variety of different languages over time – HTML/CSS/JS, C/C++, PHP, Python, Java and most recently Clojure.

Until then, I did not have a consistent IDE/editor. For web development, I used Sublime and later Atom. For Python, PyCharm. For Java, I used Eclipse and switched to IntelliJ later (I still use IntelliJ + IdeaVim for Java, as I will explain later). I used basic Emacs in college for C/C++ projects, and basic Vim at work whenever I needed to quickly make small changes to files while SSHed into servers.

The original reason I made the switch to Spacemacs was that I couldn’t find a good Clojure IDE elsewhere. CounterClockwise, the Eclipse Clojure plugin, is not actively maintained. Cursive, the IntelliJ plugin, is paid. The most recommended, actively developed, open-source IDE for Clojure, CIDER (Clojure(Script) Interactive Development Environment that Rocks), was available only for Emacs and its derivatives such as Spacemacs.

Clojure development using CIDER, complete with a fully featured REPL having auto-complete and doc-string tooltips.

However, Spacemacs in Evil (Vim) mode has now become my IDE/editor of choice for most languages, and for any editing outside programming as well. Spacemacs makes it much easier to get to a really productive state without having to do a lot of research online, tune your dotfiles, install plugins, etc. – it comes with batteries included!

The Vim Modal editing

Spacemacs supports 3 modes of usage:

  1. Among the stars aboard the Evil flagship (vim)
  2. On the planet Emacs in the Holy control tower (emacs)
  3. A hybrid mode combining 1 and 2

If you are new to both Emacs and Vim, I strongly recommend using Spacemacs in Evil (Vim) mode. Emacs/Hybrid mode needs to be considered only if you are already comfortable with the Emacs keybindings.

Vim and its modal editing let you edit text at the speed of thought. Many people argue that coding is more about thinking and less about churning out code, and that the minimal time savings you get from optimizations such as Vim are negligible. I disagree:

  1. Vim makes editing as unintrusive to your thinking as possible.
  2. The small savings add up across your profession as a coder.
  3. As a programmer, having a hackable/fully-programmable editor gives you the flexibility to program your editor as per your needs.

Sylvain Benner, the creator of Spacemacs, has written an excellent short piece on ‘Modal editing – why and how‘ that I highly recommend. In fact, I recommend going through that whole article (the beginner tutorial) if you are new to Spacemacs.

Vim itself has a steep initial learning curve. But it pays off greatly after the first few weeks if you write code as a profession. Some helpful/interesting Vim resources to get you started:

  1. Why Vim? –
  2. Vim interactive tutorial –
  3. Vim adventures (interactive game) –
  4. Vim Creep (story) –
  5. Vim cheatsheet –

There are a lot of good video tutorials on Vim on YouTube too, if that’s what you prefer. Here is one I liked – Learning Vim in a week (link).

In short, Vim lets you edit text similar to how you think about editing text. Examples:

  1. Typing ciw lets you change the current word (change in word)
  2. Typing di" lets you delete everything within two double-quotes (delete in ")
  3. Typing dt> lets you delete everything till > (delete till >)

You get the idea. These are executed in normal mode, where you will spend most of your time. There is much to be said about the composability of Vim commands too (some other time).
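As an illustration, here is what ciw looks like on a made-up line of code, with the cursor sitting anywhere on the word world:

```
print("hello world")    <- before: cursor on 'world', in normal mode
ciw                     <- change in word: deletes 'world', enters insert mode
print("hello there")    <- after typing 'there' and pressing Esc
```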

Pro tip: it is highly recommended that you remap your Caps Lock key to Control (Ctrl) once you start using Vim. You won’t need to move your fingers from the home row much with this change. On a Mac, I used Karabiner to remap the keys.

Why Spacemacs?

Emacs, “a great operating system, lacking only a decent editor” –

At the same time, Vim is said to suffer from a less than ideal scripting language and window management capabilities. Spacemacs gives you the best of both worlds.

Two primary reasons which I went with Spacemacs were:

  1. Batteries included
  2. Mnemonics

Batteries included

This was the biggest factor for me. I’ve tried to transition to Vim in the past. To get to a really productive dev environment, you need to spend significant time going through various forums, identifying the plugins you need, and installing them using a package manager like Vundle. This can be quite daunting for a newcomer.

Spacemacs has this concept of layers. For example, each language has its own layer. When you install a layer, Spacemacs installs the best community-curated plugins for that language for you. Even better, it auto-detects which layer to install when you open a new type of file and asks you whether you want to install it. A simple ‘y’ and you’re set to go!
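Under the hood, a layer is just an entry in the dotspacemacs-configuration-layers list in your ~/.spacemacs. A sketch (assuming the default dotfile layout; the comments are my own):

```lisp
;; ~/.spacemacs (sketch) – each symbol here is a layer
(defun dotspacemacs/layers ()
  (setq-default
   dotspacemacs-configuration-layers
   '(python        ; pulls in the community-curated Python packages
     git           ; pulls in Magit and related packages
     markdown)))
```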

The layers that I have installed as of April 2018:

 (auto-completion :variables
                   auto-completion-enable-help-tooltip t
                   auto-completion-enable-snippets-in-popup t)
 (shell :variables
        shell-default-height 30
        shell-default-position 'bottom)
 ;; spell-checking


Another issue that I had faced in the past was learning and memorizing all the keyboard shortcuts. Vim and Emacs have evolved over more than 20 years, and different plugins were created at different points in time. Hence there is no consistency or logical grouping to their keybindings.

Spacemacs solves that by taking all the plugins/shortcuts and logically grouping them by mnemonics. Spacebar is the leader key (the main key for issuing any command). That itself is an improvement over other editors, whose key chords strain your little fingers and can result in carpal tunnel syndrome (CTS).

To give some example of how mnemonics work:

  1. SPC w gives you all the window related commands.
    1. SPC w d deletes a window
    2. SPC w m maximizes a window
  2. SPC b gives you all buffer related commands.
  3. SPC g gives you all git related commands.

Spacemacs comes with the which-key plugin which shows a helpful menu whenever you press SPC so that you don’t need to memorize all the shortcuts.

The Spacemacs which-key plugin helps you find what you are looking for

If you are interested in learning more about Spacemacs, I highly recommend the Spacemacs ABC tutorial series by Eivind Fonn – Youtube playlist link. It is a series of 8 videos, in each of which Eivind explores one or more keys under SPC in detail.

Some cool stuff

These are some of the cool features that made the whole experience more fun.


Magit

Magit is a plugin for git version control. Until now, I generally preferred command-line git for all version-control use-cases. But the 3-window Ediff merging capability of Magit is what sold me completely. Resolving merge conflicts was never so much fun!

3 window Ediff merging using Magit. Courtesy: Magit Ediff tutorial

Undo Tree

Shortcut: SPC a u

Undo tree lets you browse through all past changes

With most modern editors, we are used to a linear timeline of changes. If you go back to a past change and modify something, you effectively lose the history of everything you did after that change. With the undo tree, your entire history is preserved and you can easily navigate to any point in time, including any branches in history.

Vim Macros

You can record and replay commands using Macros. A good introduction here – link. This can be pretty powerful if you need to do something repetitive in a smart way.

Shortcut to record: q<letter><commands>q
Shortcut to replay: <number>@<letter>
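For instance, here is a small macro (an illustrative sketch) that prefixes a line with “- ” to turn it into a list item, recorded into register a and then replayed on the next few lines:

```
qa         start recording into register a
I- <Esc>   insert '- ' at the beginning of the line, return to normal mode
j          move down to the next line
q          stop recording
5@a        replay the macro on the next 5 lines
```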

You might think these are pretty esoteric tools and of not much practical use. I too thought so until I came across multiple situations where I was able to use them effectively.

Vimium (Bonus)

Once you get used to Vim, you will resent having to take your hands off the home row (where the hjkl keys sit) at any time. Moving your hand to the trackpad starts to feel like too much work.

Worry not, Vimium comes to the rescue –
It’s a plugin for Chrome (there is Vimari for Safari, though it’s not as feature-rich as Vimium) that lets you navigate the web using your keyboard, mostly with keys on the home row.

Vimium lets you navigate the web using your keyboard

Being Pragmatic

Even though I’ve talked so much about Spacemacs, I have realized that it might just not be the right tool for some cases. In particular, for me, it was not working for Java development.

I tried all the options available in Spacemacs. Eclim, which comes with the Java layer, essentially runs a headless Eclipse process in the background. However, it is pretty slow. Other options like malabar, jdee or meghanada were either outdated, not actively developed or simply not working. It’s difficult to write Java without proper auto-complete, go-to-definition and similar features.

I found that the IntelliJ team offers an IdeaVim plugin that lets you use IntelliJ with Vim keybindings. This, along with the distraction-free mode (Cmd + I), gave me a near-identical experience to Spacemacs, but with all the Java goodness.

I am still learning a lot in both Vim and Spacemacs. I hope to update this post with more interesting/useful features that I uncover over time.

Artificial Intelligence – Experience

I recently completed the Artificial Intelligence course (CS 6601) as part of OMSCS in Fall 2017. The course gives a good overview of the key areas within AI. Having taken Knowledge Based AI (CS 7637), AI for Robotics (CS 8803-001), Machine Learning (CS 7641) and Reinforcement Learning (CS 8803-003) before, I must say that the AI syllabus had significant overlap with these courses (which is expected). However, I felt the course was still worthwhile since Prof. Thad taught these topics from his own perspective, which made me look at them in a different light. Prof. Thad also tried his best to make the course content interesting and humorous, which I really appreciated.

Course Outline

  1. Game Playing – Iterative Deepening, MinMax trees, Alpha Beta Pruning etc.
  2. Search – Uniform Cost Search, Bidirectional UCS, A*, Bidirectional A* etc.
  3. Simulated Annealing – Hill Climbing, Random restarts, Simulated Annealing, Genetic Algorithms etc.
  4. Constraint Satisfaction – Node, Arc and Path consistency, Backtracking, forward checking etc.
  5. Probability – Bayes Rule, Bayes Nets basics, Dependence etc.
  6. Bayes Nets – Conditional Independence, Confounding cause, Explaining Away, D Separation, Gibbs Sampling, Monty Hall Problem etc.
  7. Machine Learning – kNN, Expectation Maximization, Decision Trees, Random forests, Boosting, Neural nets etc.
  8. Pattern Recognition through Time – Dynamic Time Warping, Sakoe Chiba bounds, Hidden Markov Models, Viterbi Trellis etc.
  9. Logic and Planning – Propositional Logic, Classic planning, Situation Calculus etc.
  10. Planning under Uncertainty – Markov Decision Processes (MDPs), Value iteration, Policy iteration, POMDPs etc.
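To give a flavor of that last topic, here is a minimal value-iteration sketch for a made-up two-state MDP. The states, actions, rewards and discount below are my own toy example, not taken from the course material:

```python
# Value iteration for a toy two-state MDP.
# transitions[s][a] = list of (probability, next_state, reward) tuples.
GAMMA = 0.9

transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(transitions, gamma=GAMMA, tol=1e-6):
    """Apply the Bellman optimality update until the values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(transitions)
# "s1" converges to 2 / (1 - 0.9) = 20, since staying there pays 2 forever.
print(V)
```

Once the values converge, the optimal policy is simply the action that achieves the max in each state.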

The course used the classic AI textbook – Artificial Intelligence: A Modern Approach (3rd Edition) by Stuart Russell and Peter Norvig. Some chapters (such as Logic and Planning) were taught by Peter Norvig himself, whereas a few others were taught by Sebastian Thrun. There is no arguing that the course was taught by the best in the industry.

The iconic cover of Artificial Intelligence: A Modern Approach

There were 6 assignments (almost one every other week) which required a proper understanding of the course material and a decent amount of coding (in Python). There were open-book midterm and final exams as well. Even though these were open book, they involved a significant amount of work (researching and rereading the text, on-paper calculations etc.). Overall, completing these forces one to really understand the concepts, which I really liked.

Summary Stats

  1. Average time spent per week – approx. 20 hours (including whole weekends on weeks when assignments were due)
  2. Difficulty (out of 5) – 4.25 (which is what I would rate ML too, and these two would top my list)
  3. Rating – 4/5