I enrolled in the Spring ’16 batch of the OMSCS program offered by Georgia Tech and Udacity. As my first course in the program, I chose Knowledge-Based AI, taught by Prof. Ashok Goel and David Joyner. The video sessions of the course can be freely accessed through the Udacity website here.
The class was very interesting and insightful. I thoroughly enjoyed the three main projects we did throughout the class. Overall, the class focused on systematically studying human-level intelligence/cognition and seeing how we can build it using technology. As a measure for assessing human cognition, the class used Raven’s Progressive Matrices (RPM).
Our class came into the internet limelight after the course when Prof. Ashok revealed that one of our TAs – Jill Watson – was actually a bot. It was covered widely by the press, including the Washington Post and WSJ.
With regard to the course content, we were introduced to the broad areas that come under human-level AI research, such as:
- Semantic Networks, Frames and Scripts – useful knowledge representations
- Generate & Test and Means-End Analysis – two popular problem solving techniques
- Production systems – rule-based systems that are useful in AI
- Learning by Recording Cases and Case-based Reasoning – techniques to learn from past examples and to adapt them as required
- Incremental concept learning – unlike approaches such as ML, where we feed millions of examples to train models, human cognition deals with incremental data inputs. Incremental concept learning describes how we can use generalizations and specializations to make inferences from these inputs.
- Classification – mapping percepts to concepts so that we can take actions
- Formal Logic – techniques from predicate calculus such as resolution theorem proving are useful in some cases of reasoning. However human cognition is inductive and abductive in nature, whereas logic is deductive.
- Understanding – humans constantly deal with ambiguity; e.g., the same word can have different contextual meanings. Understanding is how we leverage available constraints to resolve ambiguities.
- Common sense reasoning – modeling our world in terms of a set of primitive actions and their combinations.
- Explanation-based learning and Analogical reasoning – deal with how we can extract ‘abstractions’ from prior knowledge and transfer them to new situations
- Diagnosis and learning by correcting mistakes – determining what is wrong with a malfunctioning device/system, and learning by correcting those mistakes.
- Meta-reasoning – reasoning about our own reasoning process and applying all the above techniques to the same.
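To make one of these techniques concrete, here is a minimal sketch of Generate & Test, one of the problem-solving methods above. All names and the toy puzzle are my own illustration, not from the course projects: a generator proposes candidate states, and a tester filters out the ones that violate the problem's constraints.

```python
from itertools import permutations

def generate_and_test(items, test):
    """Generate candidate orderings of items; return the first that passes the test."""
    for candidate in permutations(items):   # generator: propose candidate states
        if test(candidate):                 # tester: reject invalid candidates
            return candidate
    return None

# Toy problem: arrange digits so every pair of neighbors differs by more than 1.
def no_close_neighbors(candidate):
    return all(abs(a - b) > 1 for a, b in zip(candidate, candidate[1:]))

print(generate_and_test([1, 2, 3, 4], no_close_neighbors))  # → (2, 4, 1, 3)
```

Naive Generate & Test like this explores the full candidate space, which is why the course pairs it with smarter generators and testers that prune candidates early.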
As part of the projects, we built small ‘AI agents’ that process RPM problem images and use various techniques to solve them. These were quite challenging and required a fair amount of programming. For me, it was a good learning opportunity to build something non-trivial in Python. We used the Python image-processing library PIL and various image-processing techniques like connected-component labelling.
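For flavor, here is a minimal sketch of connected-component labelling on a binary image. In the projects the images were loaded with PIL; to keep this self-contained, the "image" below is a hard-coded 2D list of 0s and 1s, and the labelling is a plain BFS flood fill with 4-connectivity.

```python
from collections import deque

def label_components(grid):
    """4-connected component labelling on a binary grid via BFS flood fill."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                count += 1                      # found a new, unlabelled blob
                labels[r][c] = count
                queue = deque([(r, c)])
                while queue:                    # flood-fill the whole blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels

# Two separate blobs of foreground pixels -> two components.
image = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
count, _ = label_components(image)
print(count)  # → 2
```

Counting and comparing components like this is one simple way an RPM agent can summarize the shapes in each frame of a matrix problem.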
Considering the vastness of the topic – human-level intelligence – and the fact that I intend to specialize in ‘Interactive Intelligence‘, I found the course really interesting and informative. Enjoyed it!
I’m adding a nice poster created by my classmate Eric on the high level topics we studied in this class.