2014-01-17

spf13-vim, a completely cross-platform distribution of plugins and resources for Vim, GVim and MacVim, stays true to its Vim roots while adding modern features, including a plugin management system, a curated plugin set with customized configuration, advanced autocomplete, tags, support for dozens of languages and much more.

I recently read a thread where the author asked for feedback on whether or not to use spf13-vim. Responses varied greatly, ranging from people who loved it to others claiming it was bloated and overkill. Some suggested everyone should create their own configuration from scratch. Not surprisingly, many of these criticisms were accompanied by links to people’s own Vim configurations. With so many options out there, why would anyone use spf13-vim? While I can’t speak for anyone else, here are four reasons why I use spf13-vim.

First, a bit of history… spf13-vim started as my personal Vim configuration. For as long as I can remember I’ve been obsessed with user experience and have spent an embarrassingly large amount of time customizing each and every action. As many others have, I put my configuration on GitHub, not with the intent to share it with others, but to have a safe place to keep it for when I set up new computers.

1. Vanilla Vim-like

In designing spf13-vim I took extreme care with every decision and made certain not to override any essential Vim functionality. I had invested years in becoming proficient in vanilla Vim and didn’t want to throw any of that muscle memory away. I also wanted to remain completely comfortable in vanilla Vim. At the same time I wanted to smooth over some of the rougher parts and provide additive features. Virtually every one of the vanilla keystrokes and actions remains untouched.

A handful of default keystrokes have been remapped. A few of the behaviors I consider less useful have been adjusted to be more consistent with the overall Vim experience. For example, the first thing people learn in Vim is how to use ‘hjkl’ for cursor movement; spf13-vim adds ctrl+hjkl to move around windows and ‘HL’ (shift+hl) to move between tabs. Since these are common actions, it felt decidedly un-Vim-like to hide them behind commands or multiple keystrokes. These tab movements override Vim’s default behavior of moving to the top (‘H’) and bottom (‘L’) of the window, something I never use.

2. Customizability

Continuing with the history: over time people found my configuration and began to use it for themselves. Recognizing this, I added more flexibility to spf13-vim so that people could use it as a foundation while layering on their own customizations to fit their needs precisely.

Perhaps you work differently than I do and use the default ‘H’ & ‘L’ functionality. spf13-vim wraps each one of the overrides in a conditional statement, enabling users to easily customize it exactly to their needs. Most users find the spf13-vim defaults require little customization, but are happy to discover how easy it is to craft their own personalized Vim experience.

3. Power of the masses

The strength and heart of open source comes from people recognizing that together we can make something better than any of us could alone. As more and more people began to use spf13-vim, many of them wanted to contribute back. While spf13-vim started as my personal project, it has grown from mine to ours. I use Vim to write in a few languages and invested time discovering and customizing the best plugins for those languages; collectively, the users of spf13-vim support far more languages than any one person could. I love it when I edit a file in a language I haven’t used before and someone else has already crafted a customized experience for it in spf13-vim.

spf13-vim benefits greatly from contributions from its diverse user base. This ensures that, regardless of your development stack or purpose, spf13-vim likely meets your needs. With support for many different languages, plugins and uses, Vim could become weighed down. spf13-vim makes it trivial to include only the features you would use by defining a simple list.

One of the primary reasons I hear for people abandoning Vim is that properly configuring it is too difficult and that plugins tend to be incompatible with each other. With many active and engaged users working together, issues and incompatibilities are discovered and fixed quickly.

The Vim plugin community is always evolving. New plugins come out daily. With many different users exploring and experimenting with new plugins, the configuration keeps up with the latest and greatest. Without investing countless hours exploring, each user benefits greatly from the combined efforts of everyone. spf13-vim receives many pull requests each week, keeping our collective Vim experience fresh.

4. The community

The primary reason I love using spf13-vim is the great community. This goes beyond the power of the masses; the spf13-vim users are some of the most patient and kind people I’ve ever encountered. The spf13-vim mailing list is full of people, some of them quite new to Vim, asking for help. I am consistently impressed with the willingness of spf13-vim users to help.

I remember when I first learned Vim. It was at times overwhelming and frustrating. After 8 years of using Vim full time, I still feel this way from time to time. Anyone who has tried to use Vim can likely relate. How wonderful it is to have a group of helpful users available and willing to assist.

When I speak at conferences I’m most often recognized for my Vim configuration. People come up and tell me how happy they are to “use me”. How lucky I am to be part of this great project that bears my name. A big thanks to the talented users (and contributors) of the greatest Vim experience.

That’s why I use spf13-vim. Why do you?

2013-11-07

While developing Hugo I became disappointed with the interface limitations flags alone provide. A quick look at virtually any command line application (ls, grep, less, etc) reveals that most applications overuse flags to do everything and often allow conflicting flags to be applied.

Even though Hugo is relatively simple, we already had the ability to stack flags in combinations that didn’t make sense. You can set the port using --port, but this only has an effect if you also specify --server. Clearly another mechanism is needed.

Modern applications like git, brew and go use subcommands to control the actions the application performs. Each action in turn has a set of flags associated with it. It’s a great pairing that works well.

A quick search for existing libraries turned up a few, mostly based off of the code written for the go tool, which wasn’t intended to be a general purpose library but which people began using nonetheless.

I was disappointed to discover that none met my needs. Unwilling to be deterred, I decided to build Cobra, a commander for modern CLI Go applications.

Nested Sub Commands

A core requirement for Hugo commands was the ability to nest subcommands. A planned feature of Hugo is the ability to install themes, shortcodes and more from a central repository. While a possible interface could look like ‘hugo get --type=shortcode --name’, I prefer the interface ‘hugo get shortcode NAME’.

This preferred interface could be accomplished with existing libraries, but only by manually parsing the strings following the command. This made no sense to me. If we already have a mechanism to parse strings on the command line, why not use what we already have?

Cobra supports subcommands nested as many levels deep as you want.
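
As a rough illustration, here is a minimal sketch of how the ‘hugo get shortcode NAME’ nesting from the example above could be wired up with Cobra. The command names and output are hypothetical, not taken from Hugo’s actual source.

    package main

    import (
        "fmt"

        "github.com/spf13/cobra"
    )

    func main() {
        // Top level: `hugo`
        hugoCmd := &cobra.Command{Use: "hugo", Short: "hugo builds your site"}

        // Second level: `hugo get`
        getCmd := &cobra.Command{Use: "get", Short: "install items from a central repository"}

        // Third level: `hugo get shortcode NAME`
        shortcodeCmd := &cobra.Command{
            Use:   "shortcode [name]",
            Short: "install a shortcode",
            Run: func(cmd *cobra.Command, args []string) {
                fmt.Println("installing shortcode:", args)
            },
        }

        getCmd.AddCommand(shortcodeCmd)
        hugoCmd.AddCommand(getCmd)
        hugoCmd.Execute()
    }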

Commands all the way

Existing libraries all define a top level controller object which has builders for each of the commands (of which they only support one level). To me this seemed unnecessary and decidedly un-Go-like.

In Cobra, everything is a command or a flag. The top level of the application is a command which can optionally run an action and optionally have subcommands (at least one of the two is required). Each of the child commands can in turn have an action and children of its own, and so forth.

Additionally creating a command is as simple as creating a struct. No inflexible NewCommand methods needed.
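
Continuing the sketch above, a command really is just a struct literal. This is a hedged illustration against Cobra’s public API, not an excerpt from Hugo:

    // A hypothetical `version` command, defined directly as a struct.
    var versionCmd = &cobra.Command{
        Use:   "version",
        Short: "print the version number of hugo",
        Run: func(cmd *cobra.Command, args []string) {
            fmt.Println("hugo version (hypothetical output)")
        },
    }

    // Attaching it to the root command from the sketch above is a single call:
    // hugoCmd.AddCommand(versionCmd)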

POSIX/GNU Flags

The Go standard flag library is great. I really appreciate its simplicity and power. For unknown reasons the authors decided to throw out decades of flag standards and define their own. I’m not a huge fan of GNU/POSIX style flags, but they work well enough and they’re what most people expect. If there’s ever a place to do things in a new way, this doesn’t seem like it. Cobra has full support for POSIX flag functionality provided by the pflag library, a fork of the standard flag library which maintains the same interface while adding POSIX compliance.

Cobra has tight integration with this flag library. Flags can be defined globally, for a subtree or for a specific command.
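
As a hedged sketch of what those three scopes look like in practice (the variable and command names continue the illustration above and are not from Hugo):

    // verbose, repo, port and serverCmd are assumed to be declared elsewhere.

    // Global: a persistent flag on the root command is available to every command.
    hugoCmd.PersistentFlags().BoolVarP(&verbose, "verbose", "v", false, "verbose output")

    // Subtree: a persistent flag on `get` applies to `get` and all of its children.
    getCmd.PersistentFlags().StringVar(&repo, "repo", "", "repository to install from")

    // Specific command: a plain flag is only valid on this one command.
    serverCmd.Flags().IntVarP(&port, "port", "p", 1313, "port to listen on")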

Extensibility

Each of the existing libraries hard codes conditional logic to check whether the first string is ‘help’ and, if so, runs a different code path. This hard coded logic means that if you use these libraries you either have to fork them or are stuck with their help routine. Additionally, I have a philosophical issue with using conditional logic to handle a specific command. If you are building a commander to handle arbitrary commands, why not use that same mechanism to define help?

Cobra’s help functionality is a command and provides a high degree of customization without forking.
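
Since help is itself just a command, overriding it looks like defining any other command. A rough sketch, assuming the current Cobra API and the root command from the earlier illustration:

    // Replace the built-in help with a custom help command; no fork required.
    hugoCmd.SetHelpCommand(&cobra.Command{
        Use:   "help [command]",
        Short: "help about any command",
        Run: func(cmd *cobra.Command, args []string) {
            // custom help rendering goes here
        },
    })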

Conclusion

Find Examples, Documentation & Cobra at http://github.com/spf13/cobra

Contributions and feedback welcome.

2013-10-07

I’ve recently been getting into Go. I’ve built a few packages and libraries.

For this post, let’s explore the ‘gofmt’ or ‘go fmt’ tool further.

Go ships with a basic set of tools common to most languages and development environments. Like most things with go, the tools are simple in design, but powerful in function.

  • go build – compile the code
  • go install – install (and build) a package
  • go get – download and install packages including dependencies
  • go test – run test suites and benchmarks
  • go doc – generate and view documentation
  • go fmt – format and refactor your code
  • go run – build and run an app

The first thing to note is that ‘go fmt’ is just an alias for ‘gofmt -l -w’. When following along, please take note, as the two accept different parameters.

Using gofmt to format code

We’ve had decades of endless argument and debate about the correct formatting of software. Each language has its own idioms, and many have multiple competing standards. Go has done away with these endless debates once and for all by shipping a formatter that ensures all Go code follows the exact same format.

Go fmt isn’t without its detractors. People complain that go fmt isn’t customizable and that it puts braces where they don’t want them. A standard isn’t customizable, and it’s exactly this thinking that has caused so much controversy in every other language. Go fmt is great. Not only does it know the correct format, but it understands Go code. It properly lines up definitions, properly wraps if statements when they grow too long and ensures that your code completely conforms to the standard.

The best part of this is that all my code complies with the format standard perfectly and I’ve never read the go format policy. I just write code and run go fmt on that code and immediately my code conforms.

Preview which changes gofmt will make

The following command will show you all the changes gofmt will make in diff format.

$ gofmt -d path-to-code

Given a file, it operates on that file; given a directory, it operates on all .go files in that directory, recursively.

Format your code

The following command will make changes directly to your source files.

$ gofmt -l -w path-to-code
or
$ go fmt path-to-code

Here is an example of the kind of changes gofmt would make

 func (f *Filesystem) Files() []*File {
-   if len(f.files)<1 {f.captureFiles()}
+   if len(f.files) < 1 {
+       f.captureFiles()
+   }
    return f.files
 }

Simplify mode

‘gofmt’ can also simplify code where appropriate with the additional -s flag. I have found that simplify is safe to use and will only modify code when the simplification is obvious and clear.

Here is an example of the kind of changes simplify would make.

-           final = append(final, first[start:len(first)]...)
+           final = append(final, first[start:]...)

Using gofmt to refactor code

gofmt is much more powerful than simply eliminating arguments about code formatting. It can also restructure your code. Unlike traditional Unix tools like awk and grep, gofmt understands Go code and can be used to restructure it easily.

gofmt uses patterns to identify the changes to make to your code. A pattern is established in the first half of the expression, followed by a ‘->’, and then applied by the second half of the expression.

Use the flag -d instead of -w to check what gofmt will do prior to running it.

Examples

To check files for unnecessary parentheses (example from the docs):

gofmt -r '(a) -> a' -l *.go

diff hugolib/summary.go gofmt/hugolib/summary.go
    for i, line := range rstLines {
        if strings.HasPrefix(line, "<body>") {
-           rstLines = (rstLines[i+1 : len(rstLines)-3])
+           rstLines = rstLines[i+1 : len(rstLines)-3]
        }
    }

diff parser/parse_frontmatter_test.go gofmt/parser/parse_frontmatter_test.go
-       if (err == nil) != test.errIsNil {
+       if err == nil != test.errIsNil {

Rename a field in a struct. Notice how it not only changes the definition, but every place that the value is set or referenced.

gofmt -r 'a.Info -> a.Information' -d ./

diff hugolib/site.go gofmt/hugolib/site.go

 func (s *Site) initializeSiteInfo() {
-   s.Info = SiteInfo{
+   s.Information = SiteInfo{

-   page.Site = s.Info
+   page.Site = s.Information

-   s.Info.Indexes = s.Indexes.BuildOrderedIndexList()
+   s.Information.Indexes = s.Indexes.BuildOrderedIndexList()

-   s.Info.LastChange = s.Pages[0].Date
+   s.Information.LastChange = s.Pages[0].Date

    for _, p := range s.Pages {
-       p.Site = s.Info
+       p.Site = s.Information
    }

-           n.Data["OrderedIndex"] = s.Info.Indexes[plural]
+           n.Data["OrderedIndex"] = s.Information.Indexes[plural]

    return &Node{
        Data: make(map[string]interface{}),
-       Site: s.Info,
+       Site: s.Information,
    }

2013-06-17
spf13 responsive website

After a few months of work I’m happy to unveil the newest incarnation of spf13.com. For the past few years this blog has been powered by WordPress, and by Drupal before that. Both are fine pieces of software, but over time I became increasingly disappointed with how both are optimized for writing content even though by far the most common usage is reading content. Due to the need to load the PHP interpreter on each request, neither could ever be considered fast, and both consumed a lot of memory on my VPS. I have been intrigued by the recent trend of static site generation and determined that the next version of my blog should be generated by one. I also began looking into the resolutions and devices used by visitors to my blog, and they varied greatly. It didn’t make sense to cater the experience to the common one, largely because there wasn’t one. After 5 years and two engines this site had accumulated a lot of cruft: poorly rendered HTML from the WordPress WYSIWYG editor and incorrectly auto-cropped thumbnails. I took this opportunity to clean it all up and focus on the content.

Static Site Generator

Determined to use a static site generator, I reviewed the commonly used ones. While I didn’t do a thorough investigation, I discovered that all were relatively slow, taking minutes to render this blog. Additionally, calling it a blog is an oversimplification; there are a lot of different content types on this website. Since I was already looking for a good project to write in Go, I decided to build my own static site generator, called Hugo. I also determined to optimize the website for speed, performance and presentation. Like many of the other static site generators, Hugo takes the approach of using markdown with front matter for the metadata and generates HTML files for Apache, nginx or another web server to serve up. I’m preparing Hugo for its initial release and will blog more about it then.

Responsive Design

The most noticeable part of the new design is the responsive layout. Go ahead, give it a try: resize the window and watch as the design adjusts to fill the narrowest or largest of screens. Many people using responsive design optimize for smaller screens and ignore higher resolution monitors. Since my readership uses a wide variety of screens and devices, including high resolution monitors, this design is optimized for even the highest of resolutions, both to place the focus on the content and to utilize the available space.

Retina Friendly

10gen, my employer, graciously provided me with an excellent Retina display MacBook. Ever since, it’s been a bit disappointing viewing websites that suddenly looked unimpressive on the new display. I wanted every image and shape to look crisp and clean on any display. The optimal way to do this is through heavy use of vector based icon fonts, which scale cleanly to any size. The logo and all glyphs on this site are rendered using a single icon font. All photographs have been optimized for retina displays and compressed using JPEGmini. It looks great on a retina display.

Fast browsing

In the previous incarnation of this site, which ran WordPress, the site performance was adequate. According to Pingdom it was faster than 86% of all tested websites, but I was never satisfied. The total page size was 1.5 MB and an average page required over 82 requests.

Pingdom of former spf13.com

Now, the number of requests for a typical page is around 10. Including network transfer time (the majority of load time), the load is around a quarter of a second on a cold load. Given modern browsers and caching, subsequent loads will be around 100 ms. The page size is under 150 kB and the site is now faster than 99% of all tested websites.

Pingdom of current spf13.com

Easy Navigation

Lastly, navigation has been greatly simplified. Since most visitors enter through a post rather than the homepage, I put the focus on the content itself. Other than the top level navigation, which brings you to the different content types, the rest of the site is accessed through similar content. If you are viewing a presentation or post on MongoDB, you are likely interested in MongoDB and in similar content, which is available via the tag and topic links on the page.

Thank you

I present this new blog to you and hope you find it valuable. Over the next few weeks I’ll be following up this post with a series of detailed how-tos based on the experience of building this site.

Thank you for visiting, and tell me what you think in the comments.

2012-11-16

Twice a year the drivers team at 10gen gathers for a face to face meeting to spend time together working on issues and setting our goals for the upcoming six months. In September 2012 we all converged on New York City for the second ever driver days. This time we split up into teams for a hack-a-thon. Although we maintain drivers & integrations in over a dozen different languages and are all on the same team, it isn’t often that we actually work together on the same codebase. The hack-a-thon gave us a chance to do just that. We split up into five teams, each with members working in different languages. Without further ado, here is what we came up with.

Disclaimer: each project currently represents exactly one evening’s worth of work. Our intent is to pick the best project or two, polish them up and move them to the 10gen Labs account on GitHub.

As with all things open source, contributions welcome.

Mongo Contributor Hub

Ever wonder what kinds of open source MongoDB related projects are being developed these days? We did. So we hacked together a quick GitHub search & explore interface for any project GitHub reports as associated with MongoDB! Projects are organized by language, fully searchable and sorted by followers and forks. Built with Node.js, Express and MongoDB.

https://github.com/TylerBrock/mongo-contributor-hub

Try Aggro

With thirteen challenging questions you’ll learn the ins and outs of aggregation with MongoDB. Will you be able to complete all the challenges and become an aggregation master?

https://github.com/rozza/try-aggro

MongoDB.OData

OData is a widely used protocol with clients in .NET, Java, jQuery and many more. It makes sense to be able to support these clients with a MongoDB backend. With OData v3, the protocol is now rich enough to support the rich document model MongoDB already provides. MongoDB.OData lets you expose your entities (MongoDB documents), complex types (MongoDB embedded documents), and collections (MongoDB arrays) via OData and includes full support for queries and OData service operations. Support for updating is almost ready.

https://github.com/craiggwilson/mongo-dotnet-odata

Slow Query Profiler

Logging slow queries is essential for any database application, and MongoDB makes doing so relatively painless with its database profiler. Unfortunately, making sense of the system.profile collection and tying its contents back to your application requires a bit more effort. The heart of mongoqp (Mongo Query Profiler) is a bit of map/reduce JS that aggregates those queries by their BSON skeleton (i.e. keys preserved, but values removed). With queries reduced to their bare structure, any of their statistics can be aggregated, such as average query time, index scans, counts, etc.

As big fans of Genghis, a single-file MongoDB admin app, the initial intent was to contribute a new UI with the profiler results, but one night was not enough time to wrap our heads around Backbone.js and develop the query aggregation. Instead, we whipped up a quick frontend using the Silex PHP micro-framework. With the hack day deadline no longer looming, there should be plenty of time to get this functionality ported over to Genghis. Additionally, the map/reduce JS may also show up in Tyler Brock’s mongo-hacker shell enhancement package.

While presenting mongoqp to our co-workers, we learned about Dan Crosta’s professor, which already provides many of the features we hoped to implement, such as incremental data collection. We think there is still a benefit to developing the JS innards of mongoqp and getting its functionality ported over to other projects, but I would definitely encourage you to check out professor if you’d like a stand-alone query profile viewer.

https://github.com/jmikola/mongoqp

Aggregation Pipeline Web Interface

We built a web app for the new aggregation framework. It allows you to create pipelines using a web interface, making it easy for a user to play around with the new framework without having to use the command syntax. Users can incrementally add pipeline operators to test running aggregations with different operators, and can use the easy interface as an educational tool to learn how the pipelines work. The app also allows you to pipe the results of aggregation framework jobs straight to user-defined output collections and see a history of past jobs along with their run-time. The app is built in Ruby on Rails, using the MongoMapper ODM.

https://github.com/estolfo/aggre-great

2012-10-04

There are a variety of reasons businesses either do not have a disaster recovery plan or their current plan is substandard.  The beliefs of the people in charge of developing these processes (business owners and IT department) play a significant role in how effective the overall strategy will be.  This is problematic when the decision-makers have bought into one or more of the common myths surrounding disaster recovery.

Myth – Disaster Recovery is Expensive and Resource Intensive

One of the biggest reasons businesses put off developing a disaster recovery strategy is that they believe it will be too expensive and resource intensive. As a result, they view it as more of a luxury than a necessity. The truth is that as technology continues to evolve, the costs associated with disaster recovery continue to fall. Virtualization, standardization, and automation have all played key roles in making disaster recovery more affordable. They have reduced the number of people required to restore systems, which significantly decreases personnel costs. In fact, a streamlined disaster recovery strategy can require only one person. Virtualization also reduces the initial capital investment because redundant physical infrastructure is no longer necessary.

Myth – After Planning, There is No Way to Accurately Run a Test

Many businesses suffer from an ineffective disaster recovery plan and don’t even realize it.  This is because they do not believe a full-scale test is possible without significantly disrupting day-to-day activities.  Virtualization allows recovery plans to be tested against significant failures multiple times to ensure the plan is consistently effective, without disrupting day-to-day activities.  Not only are tests easy to run, but virtualization also eliminates the need to transport the IT team to multiple locations.

Myth – Creating and Managing a Disaster Recovery Solution Requires “Special” Skills

A common hesitation, especially among small businesses, is the belief that developing a disaster recovery solution requires a special set of skills.  This myth largely stems from traditional manual data recovery because end-users had to deal with duplicate, silent infrastructures across multiple sites.  Virtualization standardizes the disaster recovery process by encapsulating operating systems, applications, and servers.  This includes all the configuration data.  As a result, the entire process is now much less complex than it used to be.  Additionally, once the solution is in place it can be reliably tested and executed by staff.  Since this is all taken care of via automated processes, no special skills are required.

Myth – Disaster Recovery Plans Are like an Insurance Policy that Never Gets Used

The most common myth preventing businesses from taking disaster recovery seriously is that it is unnecessary. They believe disaster recovery is a sunk cost, similar to the car insurance policy of someone who never gets in an accident. Even if a disaster never happens, the recovery plan still provides a variety of benefits to the business. One of the most common uses is as a migration plan template any time a business switches data centers. Depending upon the industry, disaster recovery plans may also be a compliance requirement.

In order to accurately assess the need for a new or improved disaster recovery plan, these myths must be eliminated from a business’s belief structure. As long as these myths remain, the odds of successfully implementing an efficient disaster recovery strategy are slim.

Tony Spinks believes every company should consider using disaster recovery services. These services are critical for keeping businesses running after system failures. Data centers offering disaster recovery services provide the needed redundancy.

2012-09-24

The protection of your intellectual property, or any information that you post to your blog, forum or website, can be a tricky subject. While there are a few unscrupulous people out there in the cyber world who will purposely copy your words as their own, there are more people who are simply ignorant of the laws, unaware that they are stealing when they copy and reuse your musings. According to a criminal lawyer at an Orlando-based firm that we spoke to, there are steps you can take to make sure that your intellectual property is protected. Here’s what you can do:

1. Obfuscate Your Code

To protect yourself against thieves, use software like Dotfuscator and JavaScript Obfuscator. These programs obfuscate your code, making your content more difficult to steal by making a program’s source code difficult for humans to read. When someone tries to copy and paste your blog entry, they’ll be left with unreadable, useless code. Other helpful software includes vGuard, Jasob and Salamander .NET Obfuscator.

2. Post a Permissions Policy

A page of your website or blog should be dedicated to an explicit permissions policy. This policy will tell others what they can and can’t do with the content that you have posted. The key to a great permissions policy is clarity; people should know exactly what it is that they can do without your permission and when they’ll need to contact you. Once you’ve published your policy, you’ll have something to point to if someone violates it.

3. Contact the Violator

If you find that your content has been used by someone else, the first thing to do is to send a polite email to the violator. 99 times out of 100, the person who used your content has no idea that they’ve done anything wrong. Kindly thank them for finding your content interesting enough to use on their own site, point to your permissions policy, and offer suggestions as to how the person can use your content legally.

4. Demand Removal

If you’ve found the one person with whom a polite email doesn’t work, you’ll need to escalate your efforts. Sit down and draft an email demanding that the offender remove your content from their site immediately. While it’s still suggested that you remain tactful and polite, you may want to be a bit more forceful. Direct the offender to your permissions policy again, letting them know that they are violating copyright laws by allowing your content to remain public on their page.

5. Contact the Hosting Site

If steps three and four haven’t worked with the violator, you’ll need to contact their hosting provider. Find a site online that will allow you to look up the person’s domain registration and, once you do, send an email to the hosting service. Let the hosting service know the steps you have taken and that you are now requesting removal of the website. The powers-that-be at the hosting service will investigate your claim and, if it is found to have merit, will take the site down.

6. Get Legal Help

At the end of the day, your last bit of recourse may be found in hiring an attorney. A criminal defense lawyer can go to bat for you, helping you to have the offending content removed from cyberspace.

If you find that someone has copied your content for their own blog, be flattered; it means that they’ve found your content useful or relevant. Remember that most people don’t understand that they are violating any laws, and an email is often enough to get your content removed from the offending website. If you follow the steps above, you’ll ensure that your intellectual property remains yours.

Katie Hewatt is a legal researcher and contributing author for the Florida Law Firm of Katz & Phillips, which deals with Internet crime cases. The Orlando law firm keeps up to date with the latest technology online and the ever changing cyber crimes involved with it.

2012-09-17

Steve Francia at OSCON

At OSCON 2012 in Portland I gave a presentation on building your first MongoDB application. Over 150 people were in the audience, a pretty significant number for this type of hands-on tutorial, and certainly worth the weeks of preparation that went into developing it. While at OSCON I put the slides online at SlideShare, where during the four day conference they amassed over 20k views, and within a couple of weeks over 30k views. Within a month it had been viewed by more than ten times the total number of attendees at OSCON, one of the largest technical conferences in the world.

How was this presentation so successful that it amassed more views in its first week than any other presentation had accumulated over 12 years? Here are the four critical things I did to generate such an amazingly high number of views.

Have an interesting & unique topic

The first two items are focused on preparation. Your topic needs to be something that people are both interested in and find interesting. To be both, it should offer something unique about something popular. I’m really passionate about building great things, and so are a lot of people. Like many people out there, I’ve looked into using new technologies to make the process of building applications easier and more enjoyable. This behavior led me to explore NoSQL solutions, which led me to MongoDB. MongoDB is one of the most popular technologies right now. It’s claiming a significant lead over other NoSQL technologies and catching up quickly to the leaders in the very established relational space. Remember, I’m commenting here on popularity and interest. This is easily discoverable by looking at things like Google Trends or the frequency of mentions on niche sites like Hacker News. With popularity comes noise. To stand out from the noise you need a unique topic or a unique take on it. In giving this presentation I chose to focus on ‘writing your first Ruby application backed by MongoDB’, from skeleton to deployment. To my knowledge this hadn’t been done before, and it was something that stood out to people.

Make it broad and approachable

To reach the largest group of people you need to appeal to a large group of people. This means giving a presentation approachable enough that many different kinds of interested people can follow along and find value in it. Given that this presentation was aimed at first time MongoDB users without much development experience, it really resonated with a lot of people. One nice thing about OSCON is that it attracts a really diverse crowd. If it was appropriate for the audience there, it would work well for the broader Internet audience, of which OSCON is a microcosm.

Cater to the online audience

Whenever I create my slide decks I’m always focused on two audiences: the live audience that will see the presentation projected onto a screen, and the remote audience that will be viewing it on SlideShare via their computers, laptops, tablets, etc. Most people ignore that second group, but a good presentation will always reach more people online than in person. If anything, it’s the group that can’t afford to be ignored. I ensure that the slides flow well even without my narration. I give them to a few friends without the accompanying talking and see how well they can follow them. I make sure to use good layouts that use the space well and are readable both on the projector from the back of the room and when embedded on a web page. It’s important when using SlideShare and similar sites that you review the presentation on them as well as embedded, as conversion is sometimes destructive, especially if you are using an unsupported font.

Give it a good start

Success begets success. During the live presentation I put the slides up on SlideShare so that the attendees could follow along at their own pace. Since the audience was largely working with the code in the presentation, it made a lot of sense for them to bring it up on their laptops. With 150+ audience members all viewing the presentation, it got a very big initial push. A few of them also tweeted it, which helped even more people view and follow it. SlideShare has a ranking algorithm that tracks when a lot of people view a presentation in a short amount of time and flags it as a “hot” presentation. Within a couple of hours it had 600+ views and a couple dozen tweets on Twitter.

That evening I got an email from SlideShare that my presentation was getting a lot of views. I got a second email that it was experiencing a high volume of tweets as well. It continued to climb up the ranks. The next morning it was selected as the SlideShare presentation of the day.

@slideshare: '#OSCON 2012 MongoDB Tutorial' by @spf13 is a #SlideShare presentation of the day http://t.co/74DqqsJR cc @oscon on Twitter http://twitter.com/slideshare/status/225393208581038080"

On Tuesday it had amassed an outstanding 7k+ views.

From there it became a presentation of the week. It was featured prominently on the home page, where it remained for the next 3 days, at which point over 22k people had viewed it.

Once it fell off the home page and the popular/week lists and tweets became less regular, the number of views was still significant, but had slowed to a few hundred a day. At the time of this writing it’s over 30k views and climbing.

See also my post on “How to deliver a great conference tutorial” where I walk through how to prepare and deliver an effective conference tutorial.