I completed Big Data Analytics for Healthcare (CSE 6250) in Spring 2019 – my last semester of the OMSCS program! It was ranked as the hardest course in the program according to OMSCentral, a befitting finale to my 3.5-year-long OMSCS journey.
Contrary to what some might think, I did not choose the course because I am a masochist. Rather, the course covers a lot of ground that no other course in the program explores. In particular, I was interested in learning more about:
Hadoop, Pig, Hive, MapReduce
Spark (RDDs, Spark SQL, Mlib, GraphX etc.)
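Though the course projects were in Scala, the MapReduce pattern at the heart of Hadoop and Spark can be sketched in plain Python – a simplified, single-machine word count (my own illustration, not course code):

```python
from collections import defaultdict
from itertools import chain

def map_phase(lines):
    # map: emit (word, 1) pairs, like a Hadoop mapper or Spark's flatMap + map
    return chain.from_iterable(((w, 1) for w in line.split()) for line in lines)

def reduce_phase(pairs):
    # shuffle + reduce: group by key and sum, like reduceByKey in Spark
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

print(reduce_phase(map_phase(["to be or not to be"])))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a real cluster, the map and reduce phases run distributed across machines, with a shuffle in between; the programming model, however, is exactly this.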
For me, the last reason was one of the major considerations. I consider myself a programming-language geek and wouldn't miss a chance to play around with a new language if I can. I must say I was in for a treat. I got to write some significant code in Scala. The sample code and templates provided for the projects leveraged some of the nice parts of Scala, such as type inference, case classes and pattern matching. All this, along with its functional nature and full-fledged REPL, made Scala a delight to work with.
The course was pretty well designed overall. The well-maintained self-paced labs, available for almost all topics, were really helpful in getting up to speed. Check out their Scala lab to get a feel – link. The course videos were well planned. The TAs were pretty responsive on Piazza. All the good stuff.
The final project was the highlight of the course. It was a group project, worth 40% of the overall grade. Teams were allowed to pick from a variety of interesting topics. The project milestones were structured similarly to how one would pitch a data-science project in a corporate setting (say, to your CEO), execute it, and produce a report and presentation summarizing the work.
I was fortunate to get a really good team of four. We chose the domain NLP for Healthcare. After exploring a variety of suggested ideas in the domain, we settled on the following topic: Hierarchical Ensembles of Heterogeneous Models for Prediction of Medical Codes from Clinical Text. We worked well together and did a good amount of research and implementation. We got to try out various ensemble ML models combining SVMs, DNNs, etc. as part of the project.
While there were homeworks to keep you busy most weekends, I felt that the course's notoriety for difficulty was unfounded. If you go in with an open mind, excited about a fast-paced journey across the big-data tools and technologies used for data science in the industry, you won't be disappointed.
In related news, I received my Georgia Tech Masters degree in my email a couple of weeks back! It was truly a moment of happiness. I still need to figure out what to do with all the extra time now that the course is over 🙂
I completed Graduate Algorithms course (CS 8803) as part of my OMSCS program during Fall 2018. It was an interesting and really informative course.
The main topics covered in the course include dynamic programming; divide and conquer, including FFT; randomized algorithms, including RSA cryptosystem and hashing using Bloom filters; graph algorithms; max-flow algorithms; linear programming; and NP-completeness.
Of these, I had not studied FFTs or max-flow algorithms during my undergrad. Also, though I had studied basic DP (dynamic programming) in my undergrad (knapsack, matrix chain multiplication etc.) and had prepared some more for tech interviews, I had never had rigorous formal training in DP before. This course had sufficient coursework (homeworks and exams) focused on building that DP intuition, which I really liked.
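For a taste of the kind of DP intuition involved, here is the classic 0/1 knapsack in a few lines of Python (my own illustrative sketch, not course material):

```python
def knapsack(weights, values, capacity):
    # dp[w] = best total value achievable with total weight <= w
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # → 7 (take the items of weight 2 and 3)
```

The exams expect you to state the subproblem definition, the recurrence, and the evaluation order explicitly – the table above encodes all three.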
The course text was Algorithms by Dasgupta, Papadimitriou and Vazirani. It is a really good, concise introduction to most advanced algorithm topics. It is especially good as a college textbook because you can realistically expect students to read it cover to cover – unlike, say, Introduction to Algorithms by Cormen et al., which is better suited as a long-term reference manual. My one gripe with the course was that it did not cover the last chapter from Dasgupta, on Quantum Computing.
The course grading breakdown was as follows:
Midterms: (two) 25% each
Final exam: 35%
Homeworks: 5%
The midterms and final exam were closed-book, 3-hour exams, and the bulk of the grade (85%) came from these. There were 8 homeworks, spaced roughly one per week, which added up to just 5%. These usually involved 4-5 questions requiring you to write your solution in pseudo-code (only in the case of DP) or in plain English. One might think that doing the homeworks is just not worth the effort. However, I cannot overemphasize how important these were, and how helpful for the exams.
Another thing I am ever grateful to the homeworks for is introducing me to LaTeX. Until then, though I had heard that LaTeX is pretty good for writing technical documents, I never had a good use-case or forcing function to make me learn it. The HWs usually involved formulas, big-O complexities, pseudo-code, matrices etc. This gave me a really good opportunity to learn LaTeX. It's amazing!
While we are on the topic of LaTeX, if it is something that interests you, may I suggest the Emacs plugin for LaTeX? I used the Spacemacs LaTeX layer. Among other things, it provides previewing of the rendered doc in Emacs itself (SPC m p p)!
The TAs and the Professor were prompt in answering queries on Piazza and in office hours. The grading turnaround times were also really fast. Overall, I really enjoyed the course. My ratings:
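For instance, the kind of homework fragment that makes LaTeX shine – a recurrence with its big-O solution and a DP table definition (a made-up example, not an actual HW question):

```latex
% A divide-and-conquer recurrence and its solution
\[ T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n)
   \quad\Rightarrow\quad T(n) = O(n \log n) \]

% A DP recurrence (0/1 knapsack style)
\[ \mathit{dp}[i][w] = \max\bigl(\mathit{dp}[i-1][w],\;
   \mathit{dp}[i-1][w - w_i] + v_i\bigr) \]
```

Once you have a small library of snippets like these, writing up a proof or a recurrence goes much faster than fighting an equation editor.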
The first part of the course was mainly about getting familiar with NumPy and Pandas. The key takeaway was how small optimizations in how you handle large arrays with NumPy can lead to substantial performance gains.
For me, the second part was the most enlightening. It introduced the various concepts within the stock market. This was something I had been learning on an ad hoc, need-to-know basis from Wikipedia, Investopedia etc., but I had always wanted a more formal and comprehensive introduction. Some of the concepts discussed include:
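To illustrate with a toy benchmark of my own (not from the course): summing a large array with a Python-level loop versus NumPy's vectorized reduction:

```python
import time
import numpy as np

a = np.random.rand(1_000_000)

# Python-level loop: every element crosses the C/Python boundary individually
start = time.perf_counter()
total = 0.0
for x in a:
    total += x
loop_secs = time.perf_counter() - start

# Vectorized: the whole reduction runs inside NumPy's compiled code
start = time.perf_counter()
total_vec = a.sum()
vec_secs = time.perf_counter() - start

print(f"loop: {loop_secs:.3f}s, vectorized: {vec_secs:.5f}s")
```

On a typical machine the vectorized version is orders of magnitude faster, which is exactly the kind of difference the first part of the course drills into you.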
Going long or short with shares
Capital Assets Pricing Model (CAPM)
Efficient Markets Hypothesis
The Fundamental Law of active portfolio management
Arbitrage Pricing Theory
Technical analysis (Bollinger Bands, Sharpe ratio etc.)
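As a flavor of the technical-analysis part, here is how Bollinger Bands and an annualized Sharpe ratio can be computed with pandas (my own sketch on synthetic prices, not course code):

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices for one trading year (252 days)
prices = pd.Series(100 + np.cumsum(np.random.randn(252)))

# Bollinger Bands: 20-day rolling mean +/- 2 rolling standard deviations
sma = prices.rolling(20).mean()
std = prices.rolling(20).std()
upper_band = sma + 2 * std
lower_band = sma - 2 * std

# Sharpe ratio, annualized assuming 252 trading days and zero risk-free rate
daily_returns = prices.pct_change().dropna()
sharpe = np.sqrt(252) * daily_returns.mean() / daily_returns.std()

print(f"Sharpe: {sharpe:.2f}")
```

The course covers why each of these is defined the way it is (e.g. why the Sharpe ratio is scaled by the square root of the sampling frequency).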
The third part was about using different ML techniques (supervised – regression, decision trees; reinforcement learning – Q-learning etc.) to predict future stock prices based on past data. I was familiar with most of these techniques from previous AI/ML courses I had taken. However, using them with time-series data in this setting was something new.
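The Q-learning flavor used for trading is tabular, and its core is a one-line value update. A minimal sketch with toy state/action sizes (my own example, not course code):

```python
import numpy as np

n_states, n_actions = 5, 3          # toy sizes; the course discretizes market indicators
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.2, 0.9             # learning rate and discount factor

def q_update(s, a, r, s_next):
    # Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s', a'))
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * Q[s_next].max())

# One experience tuple: in state 0, action 1 earned reward 1.0, landing in state 2
q_update(0, 1, 1.0, 2)
print(Q[0, 1])  # 0.2
```

In the trading project, the states come from discretized technical indicators and the reward from daily portfolio returns; the update rule itself is unchanged.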
There were five (4 proper and 1 intro) projects in this course and 2 closed-book exams. The professor also recommended some good/interesting external resources for students to understand the stock market better. The movie The Big Short was even part of the syllabus. The course was well-paced and well organized. My ratings:
I recently read the book The Annotated Turing by Charles Petzold. It was really enlightening. Though all computer science graduates study concepts like Turing machines and Turing Completeness as part of their college courses, questions such as how and why Turing came up with these concepts are usually not taught (spoiler alert: Turing did not come up with either of the terms I just mentioned). Petzold does a good job of making Turing's classical 1937 paper more accessible without dumbing it down too much. He also gives a lot of historical context, as well as the background knowledge that Turing assumes his readers have.
The book is aptly sub-titled A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine. In it, Turing’s paper is shown as is, but is supplemented with the author’s own annotations in between. In this post, I will try to present a very concise overview of the key takeaways from the book.
Das Entscheidungsproblem
Turing's actual 1937 paper was titled On Computable Numbers, With An Application To The Entscheidungsproblem. To understand the paper, we need to understand what that problem is. Entscheidung means decision or decidability. In 1900, during his address at the Second International Congress of Mathematicians, David Hilbert, one of the prominent mathematicians of the time, presented 23 questions as challenges for mathematicians to solve in the next century. The 10th problem on the list was this:
10. Determination of the Solvability of a Diophantine Equation
Diophantus was a Greek mathematician who is estimated to have lived around 250 CE and wrote a book called Arithmetica. A Diophantine equation has come to mean an algebraic equation in which only whole-number solutions are allowed.
Given such a problem, Hilbert was interested in a process with a finite number of steps to determine whether the problem has a solution or not, i.e. is the problem solvable? This problem came to be known as the Entscheidungsproblem, or decision problem. Note that Hilbert was not asking for a general method to solve Diophantine equations here – only for a way to decide whether a solution exists.
Turing proves that no such general decision process exists.
Gödel’s Incompleteness Theorem
As Turing himself remarks in the paper, his results are superficially similar to the famous Incompleteness Theorem by Kurt Gödel from 1931. However, he clarifies the difference between both results.
Gödel essentially shows that any consistent formal system is necessarily incomplete. We can find propositions P such that neither P nor ~P (negation of P) is provable within that system.
Till that time, mathematicians were attempting to create flawless systems which were both complete and consistent. Gödel’s results proved that such attempts were in vain.
Even with these results, a general decision process as described by Hilbert could conceivably exist. Given an unprovable proposition, it would identify it as unprovable. It would similarly identify provable propositions correctly. It is such a decision process that Turing shows cannot exist.
Turing’s approach leverages Predicate Calculus extensively. I will not try to explain the details of Turing’s proof here (for that, you need to read the book), but I will try to summarize the key ideas that he builds up and uses in his proof.
Computing Machine (M): is a machine which
is supplied with a tape running through it, divided into sections (squares) capable of bearing a symbol each.
whose motion at each stage is completely defined by its configuration.
and can perform one of the following operations at any moment:
read one symbol
print one of two kinds of symbols onto a square:
figures (either 0 or 1), or
symbols of the second kind (any other character)
change the square being scanned (by shifting left or right)
Here configuration is what tells the machine what action to perform when it encounters a symbol. Henceforth, I’ll refer to Computing Machines as just machines.
Circular and Circle free machines: If a machine never writes more than a finite number of symbols of the first kind (0 or 1), it is called circular. Otherwise, it is circle-free. This notion becomes one of the most crucial aspects of Turing’s proof later.
Notice that the terminology here is a bit counter-intuitive. In Turing's formulation, a machine which never stops printing figures is considered good (circle-free), whereas one which stops printing after a finite number of steps is bad (circular).
Computable Sequences: A sequence is said to be computable if it can be computed by a circle-free machine. Turing shows that his machines are capable of computing all sequences that we generally consider as computable (such as pi).
Description Number (D.N): Turing shows how all machines can be represented using a standard form such as:
q1 S0 S1 R q2; q2 S0 S0 R q3; q3 S0 S2 R q4; q4 S0 S0 R q1;
Here the first section (delimited by ;) – q1 S0 S1 R q2; – can be interpreted as: from state q1, if the machine reads symbol S0, print the symbol S1, shift right and transition to state q2.
He then modifies (serializes) this form into a notation that can be more easily parsed by another machine, called the Standard Description (S.D) of the machine. For example, the S.D for the above machine is:
DADDCRDAA; DAADDRDAAA; DAAADDCCRDAAAA; DAAAADDRDA
He further shows that each such S.D can be represented by a number (by replacing A with 1, C with 2 and so on). This is the Description Number (D.N) of that machine. Note that the S.D and D.N can be used interchangeably, as you can always convert from one to the other.
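To make the standard form concrete, here is a small Python simulator of the four-rule machine above (my own sketch, rendering the blank S0 as '_', S1 as '0' and S2 as '1'):

```python
def simulate(rules, start_state, steps):
    """Run a Turing-style machine for a fixed number of steps."""
    tape = {}                      # sparse tape: position -> symbol ('_' = blank)
    pos, state = 0, start_state
    for _ in range(steps):
        scanned = tape.get(pos, '_')
        write, move, state = rules[(state, scanned)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return tape

# q1 S0 S1 R q2;  q2 S0 S0 R q3;  q3 S0 S2 R q4;  q4 S0 S0 R q1;
rules = {
    ('q1', '_'): ('0', 'R', 'q2'),   # print 0, shift right
    ('q2', '_'): ('_', 'R', 'q3'),   # leave blank, shift right
    ('q3', '_'): ('1', 'R', 'q4'),   # print 1, shift right
    ('q4', '_'): ('_', 'R', 'q1'),   # leave blank, shift right
}

tape = simulate(rules, 'q1', 8)
figures = [tape[p] for p in sorted(tape) if tape[p] != '_']
print(figures)  # ['0', '1', '0', '1']
```

Run for more steps, it keeps producing the alternating sequence 0 1 0 1 … on every other square – a circle-free machine in Turing's terminology.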
Universal Machine (U): This is one of the most brilliant insights of Alan Turing. Once he shows that any machine can be represented as a number, he creates a machine which can take the Description Number of any other machine M as input and prints the same output as M. This can be considered as the first ‘general-purpose’ computer ever conceived.
In the next section, I will describe how Turing uses this universal machine to prove that the decision problem cannot be solved. However, it is worth appreciating that today we remember Turing more for his conception of Computing Machines (later known as Turing Machines), the Universal Machine and its limitations than for his resolution of the decision problem itself. This is one of those cases in history where the journey contributed more than the destination.
The high-level approach that Turing takes to solve the decision problem can be outlined as follows (paraphrasing the book):
1. There is no general process to determine whether a given machine is circle-free.
2. There is no general process to determine whether a given machine ever prints a given symbol (say 0).
3. For each machine M, a predicate calculus formula Un(M) can be constructed which is provable if and only if M ever prints 0.
4. Hence a general decision process for predicate calculus would let us decide whether M ever prints 0, contradicting step 2 – so no such process exists.
Each of these steps might seem like a logical jump from the previous one, but they do follow from the previous ones consistently. However, there are a lot of tedious details in the representation of these statements in predicate calculus, and hence in the proofs.
I will try to present a brief explanation of the first of these four steps. By definition, this will not be rigorous or complete.
No general process exists to determine whether a machine is circle free
Turing proves this by reductio ad absurdum or proof by contradiction. First, he assumes that there exists a machine D which given the S.D of any machine M, tests this S.D and if M is circular, outputs “u” (unsatisfactory) and if it is circle-free, outputs “s” (satisfactory) in a finite number of steps.
If M is circle-free, we can give the S.D of M to the Universal Machine (U) to compute the output of M. Turing combines D and U to create a machine H which works as follows:
Here R(N) keeps track of the number of valid circle-free machines encountered so far.
If you follow along the flow-chart, you'll see that the Nth digit of H's output is the Nth digit in the output of the Nth valid circle-free machine.
For example, if you count up from 1:
313,325,317 -> M1 - Prints 0s to the right
3,133,225,317 -> M2 - Prints 1s to the right
The first valid D.N that you hit upon is 313,325,317. Let’s call this machine M1. Its output is an unending sequence (as it is circle-free) of 0s. So the first digit in the output of H will be 0 (the first digit of M1).
Similarly, the second valid D.N that you will find will be 3,133,225,317, for M2. Its output will be an unending sequence of 1s. Hence the second digit in the output of H will be 1 (the second digit of M2).
Here, Turing points out that the machine H has to be circle-free by our construction. It is composed of D and U. D is circle-free by our assumption, and U is only ever asked to compute the outputs of machines that D has certified as circle-free. Hence H, the combination of the two, has to be circle-free.
Now the problem arises when H reaches its own D.N, let’s call it K. Now, how will D classify K? It cannot be circular (or unsatisfactory, u) as pointed out above. But if it is circle-free, U has to print the R(K)th digit of H.
How many digits would H have printed by the time it reaches K? R(K – 1).
Now, in order to compute its R(K)th (i.e. R(K – 1) + 1) digit, H needs to evaluate itself using U, compute R(K) digits and then print the R(K)th. But when U evaluates H and reaches the R(K)th digit, it needs to evaluate H again. This leads to an infinite recursion!
By contradiction, Turing proves that there cannot be a general process by which we can determine whether a given S.D is that of a circle-free machine.
This is the crux of Turing’s proof. A variation of this problem is what has come to be known as the Halting Problem (defined by Martin Davis in 1958).
From here, Turing proves the 2nd step by reducing it to the first. Then he proves the 3rd step by reducing it to the 2nd and so on. His machines do not have a notion of algebra or even numbers to begin with. Like I said earlier, there is extensive use of first-order predicate calculus to formulate a number system and to actually get to the decision problem from here.
Maybe that can be a subject for a second post later.
Turing Completeness
As mentioned before, this terminology was coined later and was never used by Turing himself. In Computability Theory, a system is said to be Turing Complete if it can be used to simulate any Turing machine. It implies that the system is theoretically capable of expressing all tasks accomplishable by computers. Most general-purpose programming languages are Turing Complete.
Turing and Lambda Calculus
This book brought a lot of clarity to the relationship between Lambda Calculus and Turing's work. Before reading it, for some reason, I had always assumed Lambda Calculus to be more recent than Turing's machines.
In reality, Alonzo Church, the creator of Lambda Calculus, beat Turing by a very close margin in solving the decision problem. Church's paper A Note on the Entscheidungsproblem, which also proved that the decision problem was unsolvable, was received by The Journal of Symbolic Logic on April 15, 1936, six weeks before Turing's submission to the London Mathematical Society on May 28, 1936!
Turing then had to publish a 3-page appendix to his paper (in August 1936) proving that his notion of computability and Church's notion of effective calculability are equivalent. Alonzo Church had received his Ph.D. in mathematics from Princeton and taught there as a professor from 1929 to 1967. In the fall of 1936, Turing went to Princeton to get his Ph.D. under Church. Church's students also included other famous mathematicians, such as Stephen Kleene and Martin Davis, who later expanded on both Turing's and Church's work.
Lambda Calculus has had a direct influence on functional programming languages such as Lisp, Haskell, Scheme, Clojure etc.
This Spring, I completed the Intro to High Performance Computing course (CSE 6220) as part of OMSCS. It is one of the hardest (4.5/5) and also one of the highest-rated (4.8/5) courses in the program as per OMS Central. Based on my experience, I concur with both ratings.
At a high level, the course covers the algorithmic aspects of maximizing the performance of your code. This includes things like parallelizing your code across all processors or across multiple machines, exploiting the memory hierarchy to your advantage etc. The other ‘high performance’ course in the program – High Performance Computer Architectures (CS 6290), in contrast, discusses maximizing performance more at a processor architecture level.
Prof. Vuduc deserves special mention. He has put a lot of effort into making the course videos easy to understand and interesting. His hilarious antics make you laugh even while discussing the most complex topics. He is also very active on Piazza and participates in the office hours regularly.
Prof. urging us to commit some key concepts to memory
Prof. admits to being a jerk after asking a really hard question in a quiz
Prof. Vuduc introducing ‘tree’ based parallel reduction
There were 5 hands-on projects in total, all in C/C++, with one due every two weeks. These were really the time-sinks. Interestingly, these were also the most fun part of the course in my experience. These involved implementing the algorithms taught in the lectures, making everything you learn more ‘real’.
At a broad level, these were the key concepts I learned from the course:
Shared memory model (aka dynamic multithreading model)
Concepts of work and span (link) in analyzing parallel programs.
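The tree-based reduction mentioned above is a good example of the work–span trade-off: summing n numbers pairwise does the same O(n) work as a sequential loop, but its span (critical path) is only O(log n), since each level of pairwise combines could run in parallel. A sequential Python sketch of mine that also counts the levels:

```python
def tree_reduce(xs, op):
    """Pairwise reduction; returns (result, number of levels ~ span)."""
    levels = 0
    while len(xs) > 1:
        # Each level combines adjacent pairs; all pairs in a level are independent,
        # so on a parallel machine the level costs one step.
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
        levels += 1
    return xs[0], levels

total, span = tree_reduce(list(range(8)), lambda a, b: a + b)
print(total, span)  # 28 3
```

Eight inputs collapse in three levels (8 → 4 → 2 → 1), i.e. log2(8) parallel steps instead of 7 sequential additions.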
Note: This blog post was originally written in April 2018 and depends on sawtooth-core v1.1.1. The instructions mentioned may not work for newer versions of Sawtooth.
In this blog post, I will try to quickly cover Hyperledger Sawtooth basics and explain how to get to a productive development setup starting from the Sawtooth core codebase.
Hyperledger is an open-source collaborative effort to create industry standard blockchain technologies. It is backed by the Linux Foundation. You can read more about the effort here – https://www.hyperledger.org/about.
Sawtooth is one of the mature projects (1.0 released in Jan 2018) within Hyperledger, initially contributed by Intel. It is designed with Enterprise use-cases in mind. Among other things, Sawtooth introduces a novel consensus algorithm called Proof of Elapsed Time (PoET), has support for permissioned private blockchains, parallel transaction execution, dynamic switching of consensus algorithms, compatibility with Ethereum (via Seth transaction family) etc. It also has development support for a broad language base including Python, Go, Java etc.
Why this post?
Sawtooth is one of the best-documented pieces of open-source software that I've come across. I would recommend anyone interested in learning Sawtooth to go through the docs – link.
Having said that, I feel that the level of information in the docs can be a bit daunting, especially for someone new to blockchain. Also, in my case, the fact that I wasn't very comfortable with Docker added an additional layer of indirection/complexity to the learning process. I need to be able to see how something starts up, beginning from the code, to fully understand how it works.
I couldn't find any tutorials on Sawtooth outside the documentation as of today. I reached out to the friendly experts hanging out in the Sawtooth Rocket.Chat channel to resolve many issues I came across while trying to play with Sawtooth. This post is my effort to give back to the community, in the form of a basic intro to setting up Sawtooth from scratch.
The transaction processor maintains all domain-specific business logic for a transaction family (similar to smart contracts), such as:
Serializing/Deserializing the state stored in the ledger to domain objects.
Providing the business logic for validating transactions
The validator handles consensus, all blockchain read/writes, peer interaction etc. It is agnostic of the business logic. In fact, one validator process running in a host can be connected to multiple transaction processor processes.
Clients (CLI, web services etc.) interact with a transaction processor via a REST API (a separate process).
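As an aside, it helps to see how a transaction family maps its state into the ledger. For intkey, the Sawtooth docs describe a 70-hex-character address: the first 6 hex characters of the SHA-512 of the family name, followed by the last 64 hex characters of the SHA-512 of the key name. A quick sketch:

```python
import hashlib

def intkey_address(name):
    # 6-char namespace prefix derived from the family name 'intkey'
    # (per the docs this works out to '1cf126'), plus 64 chars from the key name
    prefix = hashlib.sha512('intkey'.encode()).hexdigest()[:6]
    return prefix + hashlib.sha512(name.encode()).hexdigest()[-64:]

addr = intkey_address('a')
print(len(addr))  # 70
```

Every transaction family defines its own namespace prefix this way, which is how one validator can safely host state for multiple transaction processors.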
If you are interested in playing around with Sawtooth using pre-built images, the installation section of Sawtooth would be helpful – link. You can use docker images, Ubuntu packages or install Sawtooth from the AWS marketplace.
Getting your hands dirty
In this section, I will try to get a working instance of the Intkey transaction processor (TP) running locally in Python. Intkey is a sample TP for setting, incrementing and decrementing values in a dictionary. First of all, check out the sawtooth-core codebase (git clone https://github.com/hyperledger/sawtooth-core).
We will still use a provided Docker container for running all our processes, to save us the trouble of getting the right versions of all the required dependencies. In our case, we will use the sawtooth-dev-python image. Also, we will use the pre-built validator and REST API (rather than running them from source), as application developers don't need to change these.
Now if you list your docker images, you should see the newly created one:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
sawtooth-dev-python latest ecf3ae086754 2 days ago 817MB
Next, let’s start a container and get inside it:
$ cd sawtooth-core
$ docker run -v $(pwd):/project/sawtooth-core -it sawtooth-dev-python bash
This should start a container using the sawtooth-dev-python image and get you to a root prompt within the container, with our present working directory (sawtooth-core) mounted as /project/sawtooth-core.
Before we start up the validator, we need to do some setup here: generate the validator keys and create the genesis block (the exact genesis-creation commands may differ slightly across Sawtooth versions):
# sawadm keygen
# sawset genesis
# sawadm genesis config-genesis.batch
Now we can start the validator:
# cd bin/
# sawtooth-validator -vv
You will want to keep the validator running for the rest of the steps to work, so connect to the container from a different terminal. First, find the container id of the container you just created:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
43edef3881be sawtooth-dev-python "bash" 8 days ago Up 8 days 4004/tcp, 8008/tcp keen_keller
Now, to get into the container from this new terminal, you just need to run docker exec with that container id:
$ docker exec -it 43edef3881be bash
You should see some logging in the validator terminal tab/window along these lines now:
[2018-04-12 07:25:27.657 INFO processor_handlers] registered transaction processor: connection_id=f519f4ff0bd1e26968d5bcd76cd71eed88d097c5a4846798c35f5d9c5efeb8845ea65fdf2172c12c7b4a226fc4214c18f6989c4048e0012c2fb5378252d67a08, family=sawtooth_settings, version=1.0, namespaces=['000000'], max_occupancy=10
[2018-04-12 07:25:27.705 INFO genesis] Genesis block created: 69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb (block_num:0, state:2cd74660fc59b472699aad3f6da884ae636191673e6773e6d81b9c8987065e9f, previous_block_id:0000000000000000)
[2018-04-12 07:25:27.709 INFO interconnect] Listening on tcp://127.0.0.1:8800
[2018-04-12 07:25:27.712 INFO chain] Chain controller initialized with chain head: 69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb (block_num:0, state:2cd74660fc59b472699aad3f6da884ae636191673e6773e6d81b9c8987065e9f, previous_block_id:0000000000000000)
There is one more piece required before we can get to our Intkey transaction processor – the REST API. Again, from a new terminal, get into the container and run:
# cd bin/
# sawtooth-rest-api -vv
At this point, you have a functional sawtooth validator. You can play around with it using the sawtooth command. Some examples:
# sawtooth state list
ADDRESS SIZE DATA
000000a87cb5eafdcca6a8cde0fb0dec1400c5ab274474a6aa82c12840f169a04216b7 110 b'\n...
HEAD BLOCK: "69f85216c3137fdc0ee31f8d754dfd134198b6d97f31c45e52ccbb3af356bbbe443aa38ba6a3380a7aab734b939881dbb198367311423b0b8d9aa39569d186eb"
# sawtooth transaction list
TRANSACTION_ID FAMILY VERS SIZE PAYLOAD
5964642adb957ca994f5789e9a5f9930853c20359a8adbc5c18ecf4b338fc9a00f544f42cf17d7cdc79531727fbdb307f4143c8104ec9ed0e5c3557a476cccdb sawtooth_settings 1.0 131 b'\x08\...
You can also directly query the REST API as follows:
# curl http://localhost:8008/blocks
}.. // trimmed to keep it short
The Transaction Processor
You’ll notice that pre-built Intkey transaction processor binaries are also available in the bin folder. However, let’s try to run it directly from the code. The main.py (code link) in the following directory actually starts the example Intkey processor:
We need Python 3 for some of the dependencies. But if you directly try to run main.py from Python, you will run into a couple of issues. You’ll notice it depends on the sawtooth_sdk module, which in turn depends on the sawtooth_signing module. Let’s build them first.
Now, we need to tweak the main.py file a bit to work in run-from-source approach.
It tries to import from the sawtooth_intkey.processor.handler module, which it will fail to find. The handler.py is, however, right beside main.py in the same directory. So let’s modify main.py to use that:
- from sawtooth_intkey.processor.handler import IntkeyTransactionHandler
+ from handler import IntkeyTransactionHandler
At this point, we should be able to run main.py without any errors. But it will exit without doing anything. That’s because nothing calls the main function. Let’s add a bit of code at the end of the file to do that:
if __name__ == "__main__":
    main()
Notice that you can modify the file on your host machine itself, i.e. outside the container. Since the sawtooth-core folder is shared with the container, it will automatically be up to date with the latest code.
Now, your Intkey transaction processor is ready to roll! You can try some of the commands as follows:
# python3 main.py -V
sawtooth-intkey (Hyperledger Sawtooth) version UNKNOWN
Here is how you really start it:
# python3 main.py -vv
[2018-04-12 08:09:31.254 INFO core] register attempt: OK
You should see the validator also logging in its terminal the fact that intkey TP connected successfully. Something like:
Let’s try running the CLI to interact with the Intkey transaction processor.
You will notice the provided intkey_cli.py file references other files in its directory using the sawtooth_intkey.client_cli.xyz path. For it to be able to find these files correctly, let’s copy it to the appropriate location:
# cd /project/sawtooth-core/sdk/examples/intkey_python/
# cp sawtooth_intkey/client_cli/intkey_cli.py .
Next, similar to the earlier main.py, we need to add an entry point to intkey_cli.py. Add the following snippet at the end of the file:
if __name__ == '__main__':
    main_wrapper()
This completes our setup! Let’s play around with it:
# python3 intkey_cli.py --help
usage: intkey_cli.py [-h] [-v] [-V]
-h, --help show this help message and exit
-v, --verbose enable more verbose output
-V, --version display version information
set Sets an intkey value
inc Increments an intkey value
dec Decrements an intkey value
show Displays the specified intkey value
list Displays all intkey values
If you list all the entries in the blockchain now, it should be empty (as we did not insert any):
# python3 intkey_cli.py list
Now, let’s try setting and getting a key-value pair:
// Set key 'a' to value 10
# python3 intkey_cli.py set a 10
//Let's try listing again
# python3 intkey_cli.py list
// Let's try reading the key
# python3 intkey_cli.py show a
// Let's try incrementing 'a' by 2
# python3 intkey_cli.py inc a 2
// Show again
# python3 intkey_cli.py show a
You can also do batch import of random keys (for testing, measuring throughput etc.) as follows:
// Create 1 batch file with operations on 5 keys
# python3 intkey_cli.py create_batch -K 5 -c 1
Writing to batches.intkey...
// Load the batch file
# python3 intkey_cli.py load
batches: 2 batch/sec: 72.39234705765597
// List the added keys
# python3 intkey_cli.py list
You should be able to see the corresponding logging for each of these commands in the REST API, transaction processor and validator terminal consoles.
In summary, we started by checking out the sawtooth-core codebase and ended with a docker container running the following:
Settings Transaction Processor
Intkey Transaction Processor (from source)
Intkey CLI (from source)
This concludes my hands-on overview of working with Hyperledger Sawtooth.
I hope this will be helpful as a good starting place for developers trying to create their own custom transaction families in Sawtooth. In case you run into any issues while following these instructions or have any questions, please let me know. I’ll try to help you out. As I mentioned earlier, the Sawtooth Rocket chat is a good place to get help from experts in case you are stuck.
I’ve been using Spacemacs as my IDE/editor of choice for about half a year now. I absolutely love it! Its basic premise is that
“The best editor is neither Emacs nor Vim, it’s Emacs and Vim!”.
In this post, I will try to explain briefly what it is and how I use it in a way that is newbie friendly.
I’ve been programming for over 10 years now, professionally for about 4 years. I’ve worked in a variety of different languages over time – HTML/CSS/JS, C/C++, PHP, Python, Java and most recently Clojure.
Till now, I did not have a consistent IDE/Editor. For web development, I used to use Sublime and later Atom. For Python, PyCharm. For Java, I used to use Eclipse and switched to IntelliJ later (I still use IntelliJ + IdeaVIM for Java as I will explain later). I used to use basic Emacs in college for C/C++ projects and basic Vim at work whenever I needed to quickly make small changes to files when sshed into servers.
The original reason I made the switch to Spacemacs was that I couldn’t find a good IDE for Clojure elsewhere. CounterClockwise, the Eclipse Clojure plugin, is not being maintained actively. Cursive, the IntelliJ plugin, is paid. The most recommended, actively developed, open-source IDE for Clojure – Cider (Clojure(Script) Interactive Development Environment that Rocks) – was available only for Emacs and its derivatives such as Spacemacs.
However, now Spacemacs in Evil (Vim) mode has become my IDE/editor of choice for most languages and for any editing outside programming as well. Spacemacs makes it a lot easier to get to a really productive state without having to do a lot of research online, tuning your dotfiles, installing plugins etc – it comes with batteries included!
Vim and modal editing
Spacemacs supports 3 modes of usage:
Among the stars aboard the Evil flagship (vim)
On the planet Emacs in the Holy control tower (emacs)
A hybrid mode combining 1 and 2
If you are new to both Emacs and Vim, I strongly recommend using Spacemacs in Evil (Vim) mode. Emacs/Hybrid mode needs to be considered only if you are already comfortable with the Emacs keybindings.
Vim and its modal editing let you edit text at the Speed of Thought. Many people argue that coding is more about thinking and less about churning out code, and that the small time savings you get from optimizations such as Vim are negligible. I disagree:
Vim makes editing intrude on your thinking as little as possible
The small savings add up across your profession as a coder.
As a programmer, having a hackable/fully-programmable editor gives you the flexibility to program your editor as per your needs.
Sylvain Benner, the creator of Spacemacs has written an excellent short piece on ‘Modal editing – why and how‘ that I highly recommend. In fact, I recommend going through that whole article (Beginner tutorial) if you are new to Spacemacs.
Vim itself has a steep initial learning curve. But it pays off greatly after the first few weeks if you write code as a profession. Some helpful/interesting Vim resources to get you started:
There are a lot of good video tutorials on Vim on YouTube too if that’s what you prefer. Here is one I liked – Learning Vim in a week (link).
In short, Vim lets you edit text similar to how you think about editing text. Examples:
Typing ciw lets you change the current word (change in word)
Typing di" lets you delete everything within two double-quotes (delete in ")
Typing dt> lets you delete everything till > (delete till >)
You get the idea. These commands are executed in normal mode, where you will spend most of your time. There is much to be said about the composability of Vim commands too (some other time).
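Still, a tiny taste of that composability: operators like d (delete), c (change), and y (yank) combine with counts, motions, and text objects, so the examples above are instances of a small grammar rather than memorized shortcuts. All of these are typed in normal mode:

```vim
d2w   " delete the next two words (operator d + count 2 + motion w)
ct,   " change the text up to (but not including) the next comma
yi(   " yank (copy) everything inside the surrounding parentheses
da"   " delete a double-quoted string, including the quotes
.     " repeat the last change wherever the cursor is now
```

Once you know a handful of operators and a handful of motions, you effectively know their entire cross product.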
Pro tip: once you start using Vim, it is highly recommended that you remap the Caps Lock key to act as Control (Ctrl). With this change you won’t need to move your fingers off the home row much. On Mac, I used Karabiner to remap the keys.
At the same time, Vim is said to suffer from a less-than-ideal scripting language and limited window-management capabilities. Spacemacs gives you the best of both worlds.
The two primary reasons I went with Spacemacs were:
This was the biggest factor for me. I’ve tried to transition into using Vim in the past. To get to a really productive dev environment, you need to spend significant time going through various forums, identifying the plugins you need, installing them with a package manager like Vundle, and so on. This can be quite daunting for a newcomer.
Spacemacs has this concept of layers. For example, each language has its own layer. When you install a layer, Spacemacs installs the best community-curated plugins for that language for you. Even better, it auto-detects which layer to install when you open a new type of file and asks you whether you want to install it. A simple ‘y’ and you’re set to go!
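To make this concrete, layers are declared in the `dotspacemacs-configuration-layers` list inside your `~/.spacemacs` file. A minimal sketch – the layer names here are just examples of common layers, not a prescribed set:

```elisp
;; Excerpt from ~/.spacemacs (example layer names, not a complete file)
(defun dotspacemacs/layers ()
  (setq-default
   dotspacemacs-configuration-layers
   '(clojure           ; Cider-based Clojure environment
     python
     git               ; Magit and friends
     auto-completion
     syntax-checking)))
```

Adding a language is usually just a matter of appending its layer name to this list (or answering ‘y’ when Spacemacs offers to do it for you) and restarting.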
The layers that I have installed as of April 2018:
Another issue that I had faced in the past was learning and memorizing all the keyboard shortcuts. Vim and Emacs have each evolved over more than 20 years, and different plugins were created at different points in time. Hence there is no consistency or logical grouping among their shortcuts.
Spacemacs solves that by taking all the plugins/shortcuts and logically grouping them by mnemonics. Spacebar is the leader key (the main key for issuing any command). That alone is an improvement over editors whose Ctrl-heavy chords strain your little fingers and can result in carpal tunnel syndrome (CTS).
To give some examples of how the mnemonics work:
SPC w gives you all the window related commands.
SPC w d deletes a window
SPC w m maximizes a window
SPC b gives you all buffer related commands.
SPC g gives you all git related commands.
Spacemacs comes with the which-key plugin which shows a helpful menu whenever you press SPC so that you don’t need to memorize all the shortcuts.
If you are interested in learning more about Spacemacs, I highly recommend the Spacemacs ABC tutorial series by Eivind Fonn – YouTube playlist link. It is a series of 8 videos, in each of which Eivind explores one or more keys under SPC in detail.
Some cool stuff
These are some of the cool features that made the whole experience more fun.
Magit is a plugin for Git version control. Until then, I generally preferred command-line Git for all version-control use cases. But the three-window Ediff merging capability of Magit is what sold me completely. Resolving merge conflicts was never so much fun!
Shortcut: SPC a u
With most modern editors, we are used to a linear timeline of changes. If you go back to your past change and modify something, you effectively lose history of all the things you did after that change. With the undo tree, your entire history is preserved and you can easily navigate to any point in time including any branching in history you did.
You can record and replay commands using Macros. A good introduction here – link. This can be pretty powerful if you need to do something repetitive in a smart way.
Shortcut to record: q<letter><commands>q Shortcut to replay: <number>@<letter>
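As a concrete (made-up) illustration, suppose you want to append a semicolon to each of the next five lines. You record the fix once and replay it:

```vim
qa        " start recording a macro into register a
A;<Esc>   " append ';' at the end of the line, return to normal mode
j         " move down to the next line
q         " stop recording
4@a       " replay the macro 4 more times to fix the remaining lines
```

The nice part is that a macro is just recorded keystrokes, so anything you can do interactively, you can replay.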
You might think these are pretty esoteric tools of not much practical use. I too thought so, until I came across multiple situations where I was able to use them effectively.
Once you get to using Vim, you will resent having to take your hands off the home row (where hjkl keys are situated) at any time. Moving your hands to the trackpad could feel like too much work.
Worry not, Vimium comes to the rescue – https://vimium.github.io/.
It’s a plugin for Chrome (there is Vimari for Safari, though it’s not as feature-rich as Vimium) that lets you navigate the web using your keyboard, mostly with keys in the home row.
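A few of Vimium’s default bindings give a feel for it (these are the defaults as of the version I used; press ? inside the plugin for the full list):

```
j / k   scroll down / up
f       show link hints; type the hint letters to follow a link
o       open the omnibar to search bookmarks, history, and the web
x       close the current tab
X       restore the most recently closed tab
```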
Even though I’ve talked so much about Spacemacs, I have realized that it might just not be the right tool for some cases. In particular, for me, it was not working for Java development.
I tried all the options available in Spacemacs. Eclim, which comes with the Java layer, essentially runs a headless Eclipse process in the background, but it is pretty slow. Other options like malabar, jdee, or meghanada were either outdated, no longer actively developed, or simply not working. It’s difficult to write Java without proper auto-complete, go-to-definition, and similar features.
I found that the IntelliJ team offers an IdeaVim plugin that lets you use IntelliJ with Vim keybindings. This, along with the distraction-free mode (Cmd + I), gave me an experience quite similar to Spacemacs, but with all the Java goodness.
I am still learning a lot in both Vim and Spacemacs. I hope to update this post with more interesting/useful features that I uncover over time.