As a newcomer to gRPC (in Go) there are many resources setting out what gRPC is, where it originated from and how to create a basic service and client. After completing an introduction to gRPC and setting up a basic implementation I felt a bit lost as to where I need to go next.

gRPC consists of more than just sending binary blobs over HTTP/2. gRPC is also a set of libraries providing higher-level features, consistently across platforms, that other libraries typically do not. The purpose of this post is to be a guide to where to find these resources, and how to leverage these libraries and features to make the most of the gRPC ecosystem once you understand the basics.

Note: To minimise bloat, this article assumes knowledge of gRPC and Protocol Buffers.

Quick recap

Before we begin, let’s refresh our memory and do a quick recap of what a gRPC service and client look like and how it’s defined in a protobuf definition file (.proto).

We will create a service to query Go releases with 2 methods, GetReleaseInfo and ListReleases.

service GoReleaseService {
    rpc GetReleaseInfo(GetReleaseInfoRequest) returns (ReleaseInfo) {}
    rpc ListReleases(ListReleasesRequest) returns (ListReleasesResponse) {}
}

message GetReleaseInfoRequest {
    string version = 1;
}

message ListReleasesRequest {} // empty

message ListReleasesResponse {
    repeated ReleaseInfo releases = 1;
}

message ReleaseInfo {
    string version = 1;
    string release_date = 2;
    string release_notes_url = 3;
}
Compiling this with the protoc tool and the gRPC plugin will generate Go code to marshal/unmarshal the messages (e.g. GetReleaseInfoRequest) between Go types and the Protocol Buffers binary format. The gRPC plugin will also generate code to register and implement service interface handlers, as well as code to create a gRPC client that connects to the service and sends messages.

Let’s look at a basic service and client implementation.


type releaseInfo struct {
    ReleaseDate     string `json:"release_date"`
    ReleaseNotesURL string `json:"release_notes_url"`
}

/* goReleaseService implements GoReleaseServiceServer as defined in the generated code:

// Server API for GoReleaseService service
type GoReleaseServiceServer interface {
    GetReleaseInfo(context.Context, *GetReleaseInfoRequest) (*ReleaseInfo, error)
    ListReleases(context.Context, *ListReleasesRequest) (*ListReleasesResponse, error)
}
*/
type goReleaseService struct {
    releases map[string]releaseInfo
}

func (g *goReleaseService) GetReleaseInfo(ctx context.Context,
                        r *pb.GetReleaseInfoRequest) (*pb.ReleaseInfo, error) {

    // lookup release info for version supplied in request
    ri, ok := g.releases[r.GetVersion()]
    if !ok {
        return nil, status.Errorf(codes.NotFound,
            "release version %s not found", r.GetVersion())
    }

    // success
    return &pb.ReleaseInfo{
        Version:         r.GetVersion(),
        ReleaseDate:     ri.ReleaseDate,
        ReleaseNotesUrl: ri.ReleaseNotesURL,
    }, nil
}

func (g *goReleaseService) ListReleases(ctx context.Context, r *pb.ListReleasesRequest) (*pb.ListReleasesResponse, error) {
    var releases []*pb.ReleaseInfo

    // build slice with all the available releases
    for k, v := range g.releases {
        ri := &pb.ReleaseInfo{
            Version:         k,
            ReleaseDate:     v.ReleaseDate,
            ReleaseNotesUrl: v.ReleaseNotesURL,
        }
        releases = append(releases, ri)
    }

    return &pb.ListReleasesResponse{
        Releases: releases,
    }, nil
}

func main() {
    // code redacted

    lis, err := net.Listen("tcp", *listenPort)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    log.Println("Listening on ", *listenPort)
    server := grpc.NewServer()

    pb.RegisterGoReleaseServiceServer(server, svc)

    if err := server.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}

func main() {
    conn, err := grpc.Dial(*target, grpc.WithInsecure())
    if err != nil {
        log.Fatalf("grpc.Dial err: %v", err)
    }
    defer conn.Close()

    client := pb.NewGoReleaseServiceClient(conn)

    ctx := context.Background()
    rsp, err := client.ListReleases(ctx, &pb.ListReleasesRequest{})
    if err != nil {
        log.Fatalf("ListReleases err: %v", err)
    }

    releases := rsp.GetReleases()
    if len(releases) > 0 {
        fmt.Printf("Version\tRelease Date\tRelease Notes\n")
    } else {
        fmt.Println("No releases found")
    }

    for _, ri := range releases {
        fmt.Printf("%s\t%s\t%s\n",
            ri.GetVersion(), ri.GetReleaseDate(), ri.GetReleaseNotesUrl())
    }
}

After understanding the why and after doing an introduction on the how of gRPC, the next step would be to familiarize yourself with the official Go gRPC API.

A client connection can be configured by supplying DialOption functional option values to the grpc.Dial function, and server configuration is done by supplying ServerOption functional option values to the grpc.NewServer function.

It’s not necessary for this article to understand what functional options are, but to read more about them see these posts by Dave Cheney and Rob Pike.
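As a quick illustration of the mechanics behind DialOption and ServerOption, here is a minimal, self-contained sketch of the functional options pattern. The Server type and WithMaxMsgSize option here are hypothetical stand-ins, not the real gRPC types:

```go
package main

import "fmt"

// Option configures a Server; grpc.ServerOption works the same way.
type Option func(*Server)

// Server is a hypothetical stand-in for grpc.Server.
type Server struct {
	maxMsgSize int
}

// WithMaxMsgSize returns an Option that overrides the maximum message size.
func WithMaxMsgSize(n int) Option {
	return func(s *Server) { s.maxMsgSize = n }
}

// NewServer builds a Server with defaults, then applies each supplied Option.
func NewServer(opts ...Option) *Server {
	s := &Server{maxMsgSize: 4096} // default
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := NewServer(WithMaxMsgSize(1 << 20))
	fmt.Println(s.maxMsgSize) // 1048576
}
```

The key property is that callers only pass the options they care about; everything else keeps its default.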

The API also has a concept called interceptors, which make it possible to add middleware functionality to both unary (single request/response) and streaming calls.

Interceptors are very useful for wrapping functionality around an RPC call. They help separate concerns like logging, auth, monitoring and tracing from the logic of the RPC service, and provide a single, uniform place to implement such functionality (for example, logging) for every call.
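To make the wrapping idea concrete, here is a dependency-free sketch of a logging middleware using simplified function types; the real gRPC interceptor signatures (shown later) carry more parameters, but the shape is the same:

```go
package main

import "fmt"

// Handler stands in for an RPC method; grpc.UnaryHandler plays this role.
type Handler func(req string) (string, error)

// withLogging wraps a Handler, running code before and after the call,
// just as a unary server interceptor wraps the real handler.
func withLogging(next Handler) Handler {
	return func(req string) (string, error) {
		fmt.Println("before:", req)
		resp, err := next(req)
		fmt.Println("after:", resp)
		return resp, err
	}
}

func main() {
	echo := func(req string) (string, error) { return "echo:" + req, nil }
	h := withLogging(echo)
	resp, _ := h("hello")
	fmt.Println(resp) // echo:hello
}
```

The service logic (echo) stays oblivious to the logging wrapped around it.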

Other functionality the API offers includes handling messages with a different codec than Protocol Buffers (e.g. FlatBuffers), enabling message compression, controlling maximum message sizes, adding headers and trailers, enabling tracing, and even creating load-balancing functionality (the Load Balancing API is still experimental).

Find the full documentation of the API here.

To showcase the use of the API let’s look at some use cases and build on top of our basic example above.

Securing our service

If we look at the grpc.NewServer function definition (func NewServer(opt ...ServerOption) *Server) we will see that it is a variadic function that accepts a variable number of grpc.ServerOption values.

To enable TLS for our service we need to use the grpc.Creds function which returns a grpc.ServerOption to send to the grpc.NewServer function.

Let’s look at the example.

Service code:

tlsConfig := &tls.Config{
    // TLS config values here
}

serverOption := grpc.Creds(credentials.NewTLS(tlsConfig))
server := grpc.NewServer(serverOption)

The code to create a tls.Config is standard Go. The lines of real interest are the following:

serverOption := grpc.Creds(credentials.NewTLS(tlsConfig))
server := grpc.NewServer(serverOption)

The credentials.NewTLS() function constructs a credentials.TransportCredentials value based on the supplied tls.Config. The grpc.Creds() function takes that credentials.TransportCredentials value and returns a ServerOption that sets the credentials for server connections.

Now the gRPC server will accept TLS connections.

Let’s turn our focus to enabling TLS on the client side.

In the basic example we supply a grpc.WithInsecure() value to the grpc.Dial function. The grpc.WithInsecure() function returns a DialOption which disables transport security for the client connection. Transport security is required by default, so disabling it must be done explicitly with WithInsecure. We, however, want to enable TLS transport security, which is done with the grpc.WithTransportCredentials() function. Just like the grpc.Creds() function we used to enable transport security on the server side, grpc.WithTransportCredentials() accepts a credentials.TransportCredentials value; the difference is that it returns a DialOption rather than a ServerOption, and the grpc.Dial function accepts DialOption values.

Client code:

creds := credentials.NewTLS(&tls.Config{
    // TLS config values here
})

conn, err := grpc.Dial(*target, grpc.WithTransportCredentials(creds))
if err != nil {
    log.Fatalf("grpc.Dial err: %v", err)
}

Now our service is enabled with TLS to encrypt data sent over the wire.

There are many other options available, like message and buffer sizes and specifying a custom codec (something other than Protocol Buffers). To see what else is available, the API docs are your friend.

For more server side options, see ServerOption.

For more client side options, see DialOption.

Tip: DialOption values all have a With prefix, e.g. grpc.WithTransportCredentials.

For more resources on enabling TLS for gRPC in Go:

Using gRPC with Mutual TLS in Golang

Go Secure Hello World Example

Secure gRPC with TLS/SSL

Go gRPC Auth Support

Adding middleware

As mentioned previously, the gRPC API has a concept called interceptors which enables us to add middleware functionality to our calls. To illustrate their use we will write interceptors that add logging and basic authentication to our calls.

Interceptors can be created for both clients and servers, and both support interceptors for unary RPC calls as well as streaming calls.

To create an interceptor you need to write a function whose signature matches the relevant interceptor type. For example, to create a unary interceptor for your server, based on the definitions below you would write a function with the same signature as UnaryServerInterceptor and supply it to grpc.UnaryInterceptor() to create a ServerOption value that sets the option on the server.

// DialOptions to set interceptors on the client side
func WithUnaryInterceptor(f UnaryClientInterceptor) DialOption
func WithStreamInterceptor(f StreamClientInterceptor) DialOption

// Client interceptor function definitions
type UnaryClientInterceptor func(ctx context.Context,
        method string, req, reply interface{},
        cc *ClientConn, invoker UnaryInvoker,
        opts ...CallOption) error
type StreamClientInterceptor func(ctx context.Context,
        desc *StreamDesc, cc *ClientConn,
        method string, streamer Streamer,
        opts ...CallOption) (ClientStream, error)

// ServerOptions to set interceptors on the server side
func UnaryInterceptor(i UnaryServerInterceptor) ServerOption
func StreamInterceptor(i StreamServerInterceptor) ServerOption

// Server interceptor function definitions
type UnaryServerInterceptor func(ctx context.Context, 
        req interface{}, 
        info *UnaryServerInfo, 
        handler UnaryHandler) (resp interface{}, err error)
type StreamServerInterceptor func(srv interface{}, 
        ss ServerStream, 
        info *StreamServerInfo, 
        handler StreamHandler) error

The API documents every parameter, but I want to quickly focus on how metadata is handled by interceptors. Metadata can be accessed by an interceptor via the context.Context value (for unary calls) or the ServerStream value (for streaming calls). This is useful if we need to access metadata (e.g. authorization details) set on the call, for example to authorize it.

Enough with the theory, let’s implement our logging and authorization middleware.


// general unary interceptor function to handle auth per RPC call as well as logging
func unaryInterceptor(ctx context.Context,
            req interface{},
            info *grpc.UnaryServerInfo,
            handler grpc.UnaryHandler) (interface{}, error) {
    start := time.Now()

    // skip auth when ListReleases is requested
    if info.FullMethod != "/proto.GoReleaseService/ListReleases" {
        if err := authorize(ctx); err != nil {
            return nil, err
        }
    }

    h, err := handler(ctx, req)

    log.Printf("request - Method:%s\tDuration:%s\tError:%v\n",
        info.FullMethod, time.Since(start), err)

    return h, err
}

This function will be called with each incoming request before the actual service method is invoked. We can add general logging and use the parameter values passed in to make decisions of our own, like using the grpc.UnaryServerInfo value to exclude authorization checks for certain requests, or using the context.Context value to extract metadata and check authorization like this:

// code from the authorize() function:
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
    return status.Errorf(codes.InvalidArgument, "retrieving metadata failed")
}

elem, ok := md["authorization"]
if !ok {
    return status.Errorf(codes.InvalidArgument, "no auth details supplied")
}

To enable the interceptor on the server we supply a ServerOption that sets the server’s unary interceptor to our unaryInterceptor function, using the grpc.UnaryInterceptor() function.

// supply Transport credentials and UnaryInterceptor options
server := grpc.NewServer(
    grpc.Creds(credentials.NewTLS(tlsConfig)),
    grpc.UnaryInterceptor(unaryInterceptor),
)

On the client side we will need to send the authorization details with the call. To do this we supply a DialOption to the grpc.Dial function using the grpc.WithPerRPCCredentials() functional option which expects a credentials.PerRPCCredentials value.

Below we have a struct type called basicAuthCreds which satisfies the credentials.PerRPCCredentials interface:

// basicAuthCreds is an implementation of credentials.PerRPCCredentials
// that transforms the username and password into a base64 encoded value
// similar to an HTTP Basic authorization header
type basicAuthCreds struct {
    username, password string
}

// GetRequestMetadata sets the value for the "authorization" key
func (b *basicAuthCreds) GetRequestMetadata(context.Context, ...string) (map[string]string, error) {
    return map[string]string{
        "authorization": "Basic " + basicAuth(b.username, b.password),
    }, nil
}

// RequireTransportSecurity returns true: even though the credentials are
// only base64 encoded, we want them encrypted over the wire.
func (b *basicAuthCreds) RequireTransportSecurity() bool {
    return true
}

// helper function
func basicAuth(username, password string) string {
    auth := username + ":" + password
    return base64.StdEncoding.EncodeToString([]byte(auth))
}

We then create a basicAuthCreds value and supply it to the grpc.WithPerRPCCredentials() functional option:

grpcAuth := &basicAuthCreds{
    username: *username,
    password: *password,
}

conn, err := grpc.Dial(*target,
    grpc.WithTransportCredentials(creds),
    grpc.WithPerRPCCredentials(grpcAuth),
)

When a call happens, the gRPC client will now generate the basic auth credentials and add them to the call’s metadata.


We have reached the end of our overview of the Go gRPC API and shown what is possible.

In summary, we made our basic server more secure and added middleware without having to change code in our basic service. What’s also nice is that any new methods added to the service automatically “inherit” the security and middleware functionality already created, so we can focus on the business logic required.

There is other behavior that can be changed via the API, but we will not go into detail here.

I encourage the reader to spend some time going over the API as well as its subdirectories.

Hopefully this is enough information to leverage the API and create your own specific implementation.

Go gRPC ecosystem

Let’s move on to looking at what is available in the wider gRPC ecosystem as an extension of the official API.


If you recall, in our example above all interceptor functionality (logging and auth) was contained in one interceptor. The API only allows one unary interceptor handler and one streaming interceptor handler for both clients and servers.

This is where the go-grpc-middleware package comes in very handy, as it supplies functionality to chain multiple interceptors into one:

// streamInterceptor1, unaryInterceptor1, etc. are your own interceptor functions
streamingChain := grpc_middleware.ChainStreamServer(streamInterceptor1, streamInterceptor2)
unaryChain := grpc_middleware.ChainUnaryServer(unaryInterceptor1, unaryInterceptor2)
myServer := grpc.NewServer(
    grpc.StreamInterceptor(streamingChain),
    grpc.UnaryInterceptor(unaryChain),
)

The gRPC Middleware package also has great ready-to-use interceptors for auth, logging (logrus, zap), monitoring (Prometheus), tracing (OpenTracing), retries, server-side validation, etc.

For more info:

See also:

gRPC and the web

gRPC was mainly developed for services talking RPC with each other internally in a system. It also has great support for mobile clients talking to gRPC services but how does gRPC fit into existing web technologies?


gRPC Gateway is a great project if you already have gRPC services but your API needs to be exposed as a traditional RESTful API. It includes a plugin for the protoc tool which generates a reverse-proxy server that translates a RESTful JSON API into gRPC. It also generates Swagger/API documentation.

For more info:


gRPC WebSocket Proxy is a proxy to transparently upgrade grpc-gateway streaming endpoints to use websockets. It enables bidirectional streaming of JSON payloads on top of grpc-gateway.

For more info:

gRPC Web

gRPC has support for several languages, but it’s a common question where gRPC fits into the world of the web browser. It has support for server-side JavaScript (Node.js), but what about client-side JavaScript directly in the browser?

Enter gRPC-Web.

There’s an official specification for gRPC-Web but no official implementation; however, Improbable created their own implementation based on the specification, which is used in production at the moment. They also open-sourced their solution, which includes a client-side implementation in TypeScript, a protoc plugin, and a server-side proxy in Go.

The Day 1 blog of the 2017 advent series has an excellent article on gRPC-Web and a great example of creating a client in GopherJS.

For more info:


Thank you for reading this blog. Hopefully it helped you dive deeper into the wonderful world of gRPC in Go. Although gRPC is considered a framework, the API gives us the flexibility to control behavior and make our services robust and production ready.

If you have any feedback, remarks or questions you can send me a tweet @pieterlouw

The source code for the example can be found here.


This article is about using lexmachine to tokenize strings (split up into component parts) in the Go (golang) programming language. If you find yourself processing a complex file format or network protocol this article will walk you through how to use lexmachine to process both accurately and quickly. If you need more help after reading this article take a look at the documentation, a tutorial, or an explainer article on how it all works.

What is string tokenization?

String tokenization takes a piece of text and splits it into categorized substrings. When conducted as part of parsing or compiling a programming language it may also be called lexical analysis. Depending on the kind of text being processed splitting it up might be very simple or extremely complicated. For instance, some simple text formats (such as basic CSV or TSV files) separate each record by newline characters (\n or ASCII 0x0A). At the other end of the spectrum, natural language (such as English text) may be very difficult to correctly tokenize.
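For a simple newline/tab-separated format, tokenization really is just a couple of standard-library calls; a minimal sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// tokenizeTSV splits a TSV document into records (by newline) and each
// record into fields (by tab) -- about as simple as tokenization gets.
func tokenizeTSV(text string) [][]string {
	var records [][]string
	for _, line := range strings.Split(strings.TrimRight(text, "\n"), "\n") {
		records = append(records, strings.Split(line, "\t"))
	}
	return records
}

func main() {
	records := tokenizeTSV("a\tb\tc\n1\t2\t3\n")
	fmt.Println(len(records), records[1][2]) // 2 3
}
```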

Let’s look at a quick example of tokenizing an English sentence.

Mary had a little lamb and its fleece was white as snow.

Each word will be a separate token. A token is a pair made up of the lexeme (or substring) and the token type (or category). When tokenizing English, often the token type will be the part of speech.

 Mary           had     a          little       lamb    and            ...
<Proper Noun>  <Verb>  <Article>  <Adjective>  <Noun>  <Conjunction>   ...

Tokenizing English is very difficult in general because determining the part of speech of a particular word is not always obvious. The same word can play multiple semantic roles in a sentence. Humans consider the sentence in its entirety when determining the parts of speech for each word. This allows us to “noun verbs” and “verb nouns.”

String tokenization as part of parsing

There are many common applications of tokenization which are not as difficult as assigning parts of speech to words in English. For instance, computer languages, configuration files, log files, network protocols, data interchange formats, etc… are all easy to tokenize. In these applications (as with natural language), tokenization is the first step in parsing or compilation and is called lexical analysis.

Modern compilers are designed as a “pipeline” or series of steps (called passes) which operate over a program. They take the source code and transform it step by step into the desired output (machine code, assembly, another programming language). Breaking compilation into steps simplifies each stage and makes the individual passes reusable. The start of the pipeline is shown in Figure 1.

The pipeline of a compiler. Source Code - Lexing - Tokens - Parsing - AST -

Figure 1. The start of a compiler’s pipeline of passes.
A compiler starts with source code and then performs lexical analysis to split the code up into tokens. Then, the parser transforms that stream or list of tokens into a structured format (often an Abstract Syntax Tree).

But why tokenize before parsing? Surely, the parser could operate on the bytes of the source code rather than on a list of tokens. Some parsers do operate directly on the bytes. However, the parser can be simplified and made shorter and often more robust by defining it over tokens rather than bytes.

Consider the following: programming languages often have keywords (func, return, if). They also have “identifiers” or names. The keywords are set aside as special names and are not treated by a parser the same way other names are, as they often denote the start of special syntactic regions. In this way, they operate in a similar manner to punctuation in written English.

However, since keywords would be valid names if they were not reserved special care must be taken in a parser which operates on bytes directly not to accidentally classify a keyword as a name. Parsers which work on tokens rather than bytes can ignore this problem as it is handled at the lexical analysis stage.
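A common way to handle this in a hand-written tokenizer is to scan every word as a name first and then promote it to a keyword with a table lookup; a sketch with a made-up keyword set:

```go
package main

import "fmt"

// keywords reserved by our toy language (illustrative, not from any real spec).
var keywords = map[string]bool{"func": true, "return": true, "if": true}

// classify decides whether a scanned word is a keyword or a plain name.
func classify(word string) string {
	if keywords[word] {
		return "KEYWORD"
	}
	return "IDENT"
}

func main() {
	fmt.Println(classify("return"))  // KEYWORD
	fmt.Println(classify("returns")) // IDENT: not reserved, just a name
}
```

A longest-match lexer with pattern precedence (like the one built below) gives you the same effect without the explicit lookup.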

Processing a custom configuration file format

As a motivating example for why you might need to perform tokenization in your daily life as a programmer, let’s take a look at processing a custom configuration file format. Linux has a library, libsensors, which allows it to read and understand the output of hardware sensors (such as CPU temperature probes). It has a custom configuration file format, sensors.conf, which describes how Linux should translate the raw readings from hardware monitoring chips to real-world values, such as voltage and temperature. The first part of my laptop’s sensors.conf file begins with the following:

# It is recommended not to modify this file, but to drop your local
# changes in /etc/sensors.d/. File with names that start with a dot
# are ignored.

chip "lm78-*" "lm79-*" "lm80-*" "lm96080-*"

    label temp1 "M/B Temp"

chip "w83792d-*"

    label in0 "VcoreA"
    label in1 "VcoreB"
    label in6 "+5V"
    label in7 "5VSB"
    label in8 "Vbat"

    set in6_min  5.0 * 0.90
    set in6_max  5.0 * 1.10
    set in7_min  5.0 * 0.90
    set in7_max  5.0 * 1.10
    set in8_min  3.0 * 0.90
    set in8_max  3.0 * 1.10

Let’s pretend we want to extract some information from sensors.conf files. For instance, maybe we want to know for each chip what labels have been defined. Straightforward enough and definitely doable “by hand” if necessary but much easier if the file is preprocessed to pull out the tokens:

Type    | Lexeme
COMMENT | # It is recommended not to modify this file, but to drop your local
COMMENT | # changes in /etc/sensors.d/. File with names that start with a dot
COMMENT | # are ignored.
CHIP    | chip
NAME    | "lm78-*"
NAME    | "lm79-*"
NAME    | "lm80-*"
NAME    | "lm96080-*"
LABEL   | label
NAME    | temp1
NAME    | "M/B Temp"
CHIP    | chip
NAME    | "w83792d-*"
SET     | set
NAME    | in8_max
FLOAT   | 3.0
STAR    | *
FLOAT   | 1.10

According to the man page the chip and label statements have the following structure (other bits left out):

#### *chip* Statement

A  chip  statement selects for which chips all following *compute*, *label*,
*ignore* and *set* statements are meant for.

#### *label* Statement

A  label statement describes how a feature should be called.

*chip* **NAME-LIST** <br/>
*label* **NAME** **NAME**

A  **NAME**  is  a string. If it only contains letters, digits and
underscores, it does not have to be quoted; in all other cases, you must use
double quotes around it.  Within quotes, you can use the normal escape-codes
from C.

A **NAME-LIST** is one or more **NAME** items behind each other, separated by whitespace.

So to find all of the labeled “features” of a chip we need to do two things. First, find a chip statement and collect all the names associated with it. These names are actually patterns which match multiple chips, but we will ignore that for this example. Second, collect all the label statements (and the two associated names). The labels go with the chip statement which immediately precedes them.

To implement the above idea we are going to use lexmachine to construct a lexer (or tokenizer) which splits up the file into the appropriate tokens.

Defining a lexer with lexmachine

A lexer (or tokenizer) is defined by a set of categories (token types), each defined by a pattern. The patterns (in a traditional lexer) are expressed as regular expressions. As a quick and incomplete review, regular expressions (regex) are a “pattern” which describes a set of strings. A regex is made up of characters (a, b, 0, @, …) which combine with the operators: concatenation abc, alternation a|b, grouping a(b|c)d, and repetition a*. Some examples:

  • abc matches {"abc"}
  • a|b matches {"a", "b"}
  • a(b|c)d matches {"abd", "acd"}
  • a* matches {"", "a", "aa", …}
  • a(b(c|d))* matches {"a", "abc", "abd", "abcbc", "abcbd", …}
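These examples can be checked with Go's regexp package by anchoring each pattern with ^ and $ so it must match the whole string:

```go
package main

import (
	"fmt"
	"regexp"
)

// the last example from the list above, anchored to match whole strings
var re = regexp.MustCompile(`^a(b(c|d))*$`)

func main() {
	for _, s := range []string{"a", "abc", "abcbd", "abd", "ab"} {
		fmt.Println(s, re.MatchString(s)) // "ab" is the only non-match
	}
}
```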

A fairly close reading of the man page for sensors.conf gives the following regular expressions to tokenize the file. (Note: [] define character classes which express a number of alternative characters to match)

  • AT: @
  • PLUS: \+
  • STAR: \*
  • DASH: -
  • SLASH: /
  • CARROT: \^
  • COMMA: ,
  • LPAREN: \(
  • RPAREN: \)
  • BUS: bus
  • CHIP: chip
  • LABEL: label
  • COMPUTE: compute
  • IGNORE: ignore
  • SET: set
  • NUMBER: [0-9]*\.?[0-9]+
  • COMMENT: #[^\n]*
  • SPACE: \s+
  • NAME: [a-zA-Z_][a-zA-Z0-9_]*
  • NAME: "((\\\\)|(\\.)|([^"\\]))*"
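The two NAME patterns can likewise be tried out with Go's regexp package (anchored to whole strings here, whereas the lexer below matches prefixes):

```go
package main

import (
	"fmt"
	"regexp"
)

// The two NAME patterns from the list above, anchored to whole strings.
var (
	bare   = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)
	quoted = regexp.MustCompile(`^"((\\\\)|(\\.)|([^"\\]))*"$`)
)

func main() {
	fmt.Println(bare.MatchString("in6_min"))      // true
	fmt.Println(bare.MatchString("M/B Temp"))     // false: must be quoted
	fmt.Println(quoted.MatchString(`"M/B Temp"`)) // true
	fmt.Println(quoted.MatchString(`"a\"b"`))     // true: C-style escape
}
```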

With these regular expressions in hand let’s define a lexer using lexmachine and use it to tokenize an example sensors.conf.

Aside, can’t we just use regexp?

A lexical analysis engine is very similar to a standard regular expression engine. However, the problem which regular expression engines solve (matching whole strings or finding patterns inside of strings) is slightly different from the lexical analysis problem. You could implement a lexical analyzer with the regexp package, but it would be much slower than lexmachine. Lexical analysis engines are fast because they use the same theoretical framework as regular expression engines, with the following adjustments:

  1. Prefixes of a string are matched
  2. All patterns are matched at the same time
  3. The pattern which matches the longest prefix wins
  4. In case of ties, the pattern with the highest precedence wins
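Rules 3 and 4 can be simulated with anchored regular expressions. This is not how lexmachine implements matching (it compiles all patterns into a single automaton), but the selection rule is the same:

```go
package main

import (
	"fmt"
	"regexp"
)

// pattern order encodes precedence: earlier wins ties.
var patterns = []struct {
	name string
	re   *regexp.Regexp
}{
	{"CHIP", regexp.MustCompile(`^chip`)},
	{"NAME", regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*`)},
}

// next returns the winning token type and lexeme for the prefix of s:
// the longest match wins; ties go to the earlier (higher-precedence) pattern.
func next(s string) (string, string) {
	bestName, bestLex := "", ""
	for _, p := range patterns {
		if m := p.re.FindString(s); len(m) > len(bestLex) {
			bestName, bestLex = p.name, m
		}
	}
	return bestName, bestLex
}

func main() {
	t, l := next("chip w83792d")
	fmt.Println(t, l) // CHIP chip  (tie on length, CHIP has precedence)
	t, l = next("chips")
	fmt.Println(t, l) // NAME chips (longer match wins)
}
```

This is why "chips" lexes as a NAME rather than the keyword CHIP followed by "s".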

Constructing a *lexmachine.Lexer

Importing the packages

import (
	"fmt"
	"log"

	"github.com/timtadh/lexmachine"
	"github.com/timtadh/lexmachine/machines"
)

Defining the tokens types

There are many ways to represent the token types (categories). In this example, the names of the types are defined as a list of strings. The types themselves (ids) will be the index of the names. An init function is used to construct a reverse mapping from name to id.

var tokens = []string{
	"AT", "PLUS", "STAR", "DASH", "SLASH", "BACKSLASH", "CARROT",
	"BACKTICK", "COMMA", "LPAREN", "RPAREN", "BUS", "CHIP", "LABEL",
	"COMPUTE", "IGNORE", "SET", "NUMBER", "NAME", "COMMENT",
	"SPACE", "NEWLINE",
}
var tokmap map[string]int

func init() {
	tokmap = make(map[string]int)
	for id, name := range tokens {
		tokmap[name] = id
	}
}

Constructing a new lexer object

lexer := lexmachine.NewLexer()

Adding a single pattern

Patterns are added using the lexer.Add method. Let’s add the pattern for the chip keyword:

lexer.Add([]byte(`chip`), func(s *lexmachine.Scanner, m *machines.Match) (interface{}, error) {
	return 0, nil
})

The Add method takes two arguments. The first is the regular expression and the second is the lexing action function. The action is called on pattern discovery. This action typically takes a low-level *machines.Match object and turns it into a token representation of your (the user’s) choice.

For users who do not have strict requirements for how tokens are represented the *lexmachine.Token object provides a useful implementation. *lexmachine.Scanner has a utility method for constructing the tokens which helps us write a simple getToken action:

func getToken(tokenType int) lexmachine.Action {
	return func(s *lexmachine.Scanner, m *machines.Match) (interface{}, error) {
		return s.Token(tokenType, string(m.Bytes), m), nil
	}
}

Adding all the patterns

func newLexer() *lexmachine.Lexer {
	lexer := lexmachine.NewLexer()
	lexer.Add([]byte("@"), getToken(tokmap["AT"]))
	lexer.Add([]byte(`\+`), getToken(tokmap["PLUS"]))
	lexer.Add([]byte(`\*`), getToken(tokmap["STAR"]))
	lexer.Add([]byte("-"), getToken(tokmap["DASH"]))
	lexer.Add([]byte("/"), getToken(tokmap["SLASH"]))
	lexer.Add([]byte("\\"), getToken(tokmap["BACKSLASH"]))
	lexer.Add([]byte(`\^`), getToken(tokmap["CARROT"]))
	lexer.Add([]byte("`"), getToken(tokmap["BACKTICK"]))
	lexer.Add([]byte(","), getToken(tokmap["COMMA"]))
	lexer.Add([]byte(`\(`), getToken(tokmap["LPAREN"]))
	lexer.Add([]byte(`\)`), getToken(tokmap["RPAREN"]))
	lexer.Add([]byte("bus"), getToken(tokmap["BUS"]))
	lexer.Add([]byte("chip"), getToken(tokmap["CHIP"]))
	lexer.Add([]byte("label"), getToken(tokmap["LABEL"]))
	lexer.Add([]byte("compute"), getToken(tokmap["COMPUTE"]))
	lexer.Add([]byte("ignore"), getToken(tokmap["IGNORE"]))
	lexer.Add([]byte("set"), getToken(tokmap["SET"]))
	lexer.Add([]byte(`[0-9]*\.?[0-9]+`), getToken(tokmap["NUMBER"]))
	lexer.Add([]byte(`[a-zA-Z_][a-zA-Z0-9_]*`), getToken(tokmap["NAME"]))
	lexer.Add([]byte(`"((\\.)|([^"\\]))*"`), func(s *lexmachine.Scanner, m *machines.Match) (interface{}, error) {
		return s.Token(tokmap["NAME"], string(m.Bytes[1:len(m.Bytes)-1]), m), nil
	})
	lexer.Add([]byte(`#[^\n]*`), getToken(tokmap["COMMENT"]))
	lexer.Add([]byte(`( |\t|\f)+`), getToken(tokmap["SPACE"]))
	lexer.Add([]byte(`\\\n`), getToken(tokmap["SPACE"]))
	lexer.Add([]byte(`\n|\r|\n\r`), getToken(tokmap["NEWLINE"]))
	err := lexer.Compile()
	if err != nil {
		panic(err)
	}
	return lexer
}

Skipping a pattern

Sometimes it is advantageous to not emit tokens for certain patterns and to instead skip them. Commonly this occurs for whitespace and comments. To skip a pattern simply have the action return nil, nil:

func skip(scan *lexmachine.Scanner, match *machines.Match) (interface{}, error) {
	return nil, nil
}

lexer.Add([]byte(`\s+`), skip)     // skip whitespace
lexer.Add([]byte(`#[^\n]*`), skip) // skip comments

In our example for sensors.conf we only care about the keywords and the NAME tokens. Let’s skip the rest:

func newLexer() *lexmachine.Lexer {
	lexer := lexmachine.NewLexer()
	lexer.Add([]byte(`@`), skip)
	lexer.Add([]byte(`\+`), skip)
	lexer.Add([]byte(`\*`), skip)
	lexer.Add([]byte("-"), skip)
	lexer.Add([]byte("/"), skip)
	lexer.Add([]byte("\\"), skip)
	lexer.Add([]byte(`\^`), skip)
	lexer.Add([]byte("`"), skip)
	lexer.Add([]byte(","), skip)
	lexer.Add([]byte(`\(`), skip)
	lexer.Add([]byte(`\)`), skip)
	lexer.Add([]byte("bus"), skip)
	lexer.Add([]byte("chip"), getToken(tokmap["CHIP"]))
	lexer.Add([]byte("label"), getToken(tokmap["LABEL"]))
	lexer.Add([]byte("compute"), skip)
	lexer.Add([]byte("ignore"), skip)
	lexer.Add([]byte("set"), skip)
	lexer.Add([]byte(`[0-9]*\.?[0-9]+`), skip)
	lexer.Add([]byte(`[a-zA-Z_][a-zA-Z0-9_]*`), getToken(tokmap["NAME"]))
	lexer.Add([]byte(`"((\\.)|([^"\\]))*"`), func(s *lexmachine.Scanner, m *machines.Match) (interface{}, error) {
		return s.Token(tokmap["NAME"], string(m.Bytes[1:len(m.Bytes)-1]), m), nil
	})
	lexer.Add([]byte(`#[^\n]*`), skip)
	lexer.Add([]byte(`( |\t|\f)+`), skip)
	lexer.Add([]byte(`\\\n`), skip)
	lexer.Add([]byte(`\n|\r|\n\r`), getToken(tokmap["NEWLINE"]))
	err := lexer.Compile()
	if err != nil {
		panic(err)
	}
	return lexer
}

Constructing the lexer exactly once

The lexer should be constructed and compiled exactly once: on program startup (unless the regular expressions are defined dynamically). This ensures that you only pay the compilation costs at program startup and get the maximum benefit from the efficient DFA representation produced by compilation.

var lexer *lexmachine.Lexer

func init() {
	lexer = newLexer()
}
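If construction cannot happen at program start (for instance, when the patterns are only known at runtime), sync.Once provides the same build-exactly-once guarantee lazily. The expensiveThing type below is a hypothetical stand-in for the compiled lexer, not part of lexmachine:

```go
package main

import (
	"fmt"
	"sync"
)

// expensiveThing stands in for a compiled lexer.
type expensiveThing struct{ builds int }

var (
	once     sync.Once
	instance *expensiveThing
)

// getInstance builds the object on first use and reuses it afterwards,
// even when called concurrently from multiple goroutines.
func getInstance() *expensiveThing {
	once.Do(func() {
		instance = &expensiveThing{builds: 1}
	})
	return instance
}

func main() {
	a := getInstance()
	b := getInstance()
	fmt.Println(a == b, a.builds) // true 1
}
```

Either way, the compilation cost is paid once and every caller shares the efficient DFA.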

Example tokenization

scanner, err := lexer.Scanner([]byte("chip chip1 chip2 label name value"))
if err != nil {
	panic(err)
}
for tk, err, eof := scanner.Next(); !eof; tk, err, eof = scanner.Next() {
	if err != nil {
		panic(err)
	}
	token := tk.(*lexmachine.Token)
	fmt.Printf("%-7v | %-25q | %v:%v-%v:%v\n",
		tokens[token.Type], // tokens holds the token type names, defined alongside tokmap
		token.Value,
		token.StartLine, token.StartColumn,
		token.EndLine, token.EndColumn)
}


CHIP    | "chip"                    | 1:1-1:4
NAME    | "chip1"                   | 1:6-1:10
NAME    | "chip2"                   | 1:12-1:16
LABEL   | "label"                   | 1:18-1:22
NAME    | "name"                    | 1:24-1:27
NAME    | "value"                   | 1:29-1:33

For more details on the scanner interface consult the narrative documentation.

Associating labels with chips

There are several ways we could approach taking the output of the scanner above and using it to find which labels go with which chip names. In this example, a very simple, short, but ugly method is going to be demonstrated. Inside the scanning for-loop, a simple state machine tracks which statement a particular NAME token belongs to. If it belongs to a chip statement or a label statement, it is associated appropriately. Finally, whenever a new CHIP token is encountered, the current buffer of labels is associated with the previous chip statement.

func printChipLabels(text []byte) error {
	scanner, err := lexer.Scanner(text)
	if err != nil {
		return err
	}
	order := make([]string, 0, 10)
	chipLabels := make(map[string][]string)
	curChips := make([]string, 0, 10)
	curLabels := make([]string, 0, 10)
	curLabel := make([]string, 0, 2)
	state := "none"
	addChips := func() {
		for _, chip := range curChips {
			if _, has := chipLabels[chip]; !has {
				order = append(order, chip)
			}
			chipLabels[chip] = append(chipLabels[chip], curLabels...)
		}
		curChips = make([]string, 0, 10)
		curLabels = make([]string, 0, 10)
		curLabel = make([]string, 0, 2)
	}
	for tk, err, eof := scanner.Next(); !eof; tk, err, eof = scanner.Next() {
		if err != nil {
			return err
		}
		token := tk.(*lexmachine.Token)
		switch token.Type {
		case tokmap["CHIP"]:
			addChips() // a new chip statement closes the previous one
			state = "chip"
		case tokmap["LABEL"]:
			state = "label"
		case tokmap["NAME"]:
			switch state {
			case "chip":
				curChips = append(curChips, token.Value.(string))
			case "label":
				curLabel = append(curLabel, token.Value.(string))
				if len(curLabel) >= 2 {
					curLabels = append(curLabels, strings.Join(curLabel, ":"))
					curLabel = make([]string, 0, 2)
				}
			}
		case tokmap["NEWLINE"]:
			state = "none"
		}
	}
	addChips() // close the final chip statement
	for _, chip := range order {
		fmt.Printf("chip %v: %v\n", chip, strings.Join(chipLabels[chip], ", "))
	}
	return nil
}

Output for the sensors.conf file on my machine:

chip lm78-*: temp1:M/B Temp
chip lm79-*: temp1:M/B Temp
chip lm80-*: temp1:M/B Temp
chip lm96080-*: temp1:M/B Temp
chip w83792d-*: in0:VcoreA, in1:VcoreB, in6:+5V, in7:5VSB, in8:Vbat
chip w83793-*: in0:VcoreA, in1:VcoreB, in7:+5V, in8:5VSB, in9:Vbat
chip w83795g-*: in12:+3.3V, in13:3VSB, in14:Vbat
chip w83795adg-*: in12:+3.3V, in13:3VSB, in14:Vbat
chip via686a-*: in0:Vcore, in2:+3.3V, in3:+5V, in4:+12V
chip adm1025-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, in5:VCC, temp1:CPU Temp, temp2:M/B Temp
chip ne1619-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, in5:VCC, temp1:CPU Temp, temp2:M/B Temp
chip lm87-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp1:M/B Temp, temp2:CPU Temp
chip adm1024-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp1:M/B Temp, temp2:CPU Temp
chip it87-*: in8:Vbat
chip it8712-*: in8:Vbat
chip it8716-*: in8:Vbat
chip it8718-*: in8:Vbat
chip it8720-*: in8:Vbat
chip fscpos-*: in0:+12V, in1:+5V, in2:Vbat, temp1:CPU Temp, temp2:M/B Temp, temp3:Aux Temp
chip fscher-*: in0:+12V, in1:+5V, in2:Vbat, temp1:CPU Temp, temp2:M/B Temp, temp3:Aux Temp
chip fscscy-*: in0:+12V, in1:+5V, in2:+3.3V, temp1:CPU0 Temp, temp2:CPU1 Temp, temp3:M/B Temp, temp4:Aux Temp
chip fschds-*: temp1:CPU Temp, temp2:Super I/O Temp, temp3:System Temp, an1:PSU Fan, an2:CPU Fan, an3:System FAN2, an4:System FAN3, an5:System FAN4, in0:+12V, in1:+5V, in2:Vbat
chip fscsyl-*: temp1:CPU Temp, temp4:Super I/O Temp, temp5:Northbridge Temp, an1:CPU Fan, an2:System FAN2, an3:System FAN3, an4:System FAN4, an7:PSU Fan, in0:+12V, in1:+5V, in2:Vbat, in3:+3.3V, in5:+3.3V-Aux
chip vt1211-*: in5:+3.3V, temp2:SIO Temp
chip vt8231-*: in5:+3.3V
chip smsc47m192-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, in5:VCC, temp1:SIO Temp
chip lm85-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip lm85b-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip lm85c-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip adm1027-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip adt7463-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip adt7468-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip emc6d100-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip emc6d102-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip emc6d103-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip emc6d103s-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip emc6w201-*: in2:+3.3V, in3:+5V, temp6:M/B Temp
chip pc87365-*: in7:3VSB, in8:VDD, in9:Vbat, in10:AVDD, temp3:SIO Temp
chip pc87366-*: in7:3VSB, in8:VDD, in9:Vbat, in10:AVDD, temp3:SIO Temp
chip adm1030-*: temp1:M/B Temp
chip adm1031-*: temp1:M/B Temp
chip w83627thf-*: in3:+5V, in7:5VSB, in8:Vbat
chip w83627ehf-*: in0:Vcore, in2:AVCC, in3:+3.3V, in7:3VSB, in8:Vbat
chip w83627dhg-*: in0:Vcore, in2:AVCC, in3:+3.3V, in7:3VSB, in8:Vbat
chip w83667hg-*: in0:Vcore, in2:AVCC, in3:+3.3V, in7:3VSB, in8:Vbat
chip nct6775-*: in0:Vcore, in2:AVCC, in3:+3.3V, in7:3VSB, in8:Vbat
chip nct6776-*: in0:Vcore, in2:AVCC, in3:+3.3V, in7:3VSB, in8:Vbat
chip w83627uhg-*: in2:AVCC, in3:+5V, in7:5VSB, in8:Vbat
chip f71805f-*: in0:+3.3V
chip f71872f-*: in0:+3.3V, in9:Vbat, in10:3VSB
chip k8temp-*: temp1:Core0 Temp, temp2:Core0 Temp, temp3:Core1 Temp, temp4:Core1 Temp
chip dme1737-*: in0:5VSB, in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, in5:3VSB, in6:Vbat, temp2:SIO Temp
chip sch311x-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, in5:3VSB, in6:Vbat, temp2:SIO Temp
chip sch5027-*: in0:5VSB, in1:Vcore, in2:+3.3V, in5:3VSB, in6:Vbat, temp2:SIO Temp
chip sch5127-*: in2:+3.3V, in5:3VSB, in6:Vbat
chip f71808e-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71808a-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71862fg-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71869-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71869a-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71882fg-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71889fg-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71889ed-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71889a-*: in0:+3.3V, in7:3VSB, in8:Vbat
chip f71858fg-*: in0:+3.3V, in1:3VSB, in2:Vbat
chip f8000-*: in0:+3.3V, in1:3VSB, in2:Vbat
chip f81865f-*: in0:+3.3V, in5:3VSB, in6:Vbat
chip adt7473-*: in2:+3.3V, temp2:Board Temp
chip adt7475-*: in2:+3.3V, temp2:Board Temp
chip adt7476-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp
chip adt7490-*: in1:Vcore, in2:+3.3V, in3:+5V, in4:+12V, temp2:M/B Temp


This article presented lexmachine, a library that helps you break up complex strings into their component parts safely, reliably, and quickly. To learn more, visit the project page, browse the documentation, look at another tutorial, or read an explainer article on how it all works.


Go is widely used to implement microservices and APIs. And for those wishing to set up a dynamic website without resorting to, say, Ruby or PHP, Go offers a lot of tools out of the box. The use of net/http and html/templates can get you very far already.

As soon as a user needs to be identified across multiple HTTP requests, you need to start thinking about web sessions. They can be thought of as storage units assigned to a user, which persist across requests. Some implementations store data in an encrypted cookie; others, such as JSON Web Tokens (JWT), are often held in local browser storage.

The most common method is to store a token, or session ID, in a browser cookie. Based on that token, the server then loads the session data from a data store. Over the years, a number of best practices have evolved that make cookie-based web sessions reasonably safe. The OWASP organization lists a number of recommendations aimed at reducing common attacks such as session hijacking or session fixation. Unfortunately, many Go packages for web sessions leave it up to the user to implement these recommendations or don’t even provide the tools to do it.

Web Sessions

We present github.com/rivo/sessions, a Go package designed for cookie-based web sessions which implements OWASP recommendations. Its usage is quite simple:

func MyHandler(response http.ResponseWriter, request *http.Request) {
  session, err := sessions.Start(response, request, true)
  if err != nil {
    panic(err)
  }
  fmt.Fprintln(response, "We have a session")
}

Now you can already store data which will be available during subsequent HTTP requests:

session.Set("cart", items)
// ...
items := session.Get("cart", nil)
if items == nil {
  fmt.Println("No")
}

The sessions package takes care of everything for you in the background: sessions expire after a period of inactivity, session tokens are regenerated regularly, and remote IP addresses as well as user agent strings are examined for unauthorized changes, leading to session invalidation. All of these options can be customized to your needs. Naturally, you can also connect any session store of your choice.
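The package's internals are not reproduced here, but one core OWASP recommendation (session tokens drawn from a cryptographically secure random source, with at least 128 bits of entropy) can be sketched in a few lines. newSessionToken is a hypothetical helper for illustration, not part of github.com/rivo/sessions:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// newSessionToken returns a URL-safe token built from n bytes of
// cryptographically secure randomness (OWASP suggests at least 16).
func newSessionToken(n int) (string, error) {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

func main() {
	token, err := newSessionToken(24) // 192 bits of entropy
	if err != nil {
		panic(err)
	}
	fmt.Println(len(token)) // 32: base64 encodes 24 bytes as 32 characters
}
```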

For many interactive websites, instead of saving data in the session directly, you may wish to step up one abstraction level and attach a User object to the session:

// Attach a user to the session.
session.LogIn(user, true, response)

// Remove the user from the session:
session.LogOut()

User is an interface so it imposes no specific structure on your existing user model (other than requiring a unique user ID). But help is provided with additional functions such as sessions.CUID(), a function that generates compact unique identifiers suitable for users, or sessions.ReasonablePassword() which implements NIST SP 800-63B guidelines for passwords.
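The exact rules of sessions.ReasonablePassword() are not reproduced here, but the spirit of NIST SP 800-63B (require a minimum length, reject known-compromised passwords, impose no arbitrary composition rules) might look something like the following hedged sketch; reasonablePassword and the tiny deny-list are illustrative stand-ins:

```go
package main

import (
	"fmt"
	"strings"
)

// commonPasswords stands in for a real breached-password list.
var commonPasswords = map[string]bool{
	"password": true, "12345678": true, "qwertyuiop": true,
}

// reasonablePassword applies the spirit of NIST SP 800-63B: at least
// 8 characters and not on a known-compromised list, with no arbitrary
// composition rules. (len counts bytes; a real check would count runes.)
func reasonablePassword(pw string) error {
	if len(pw) < 8 {
		return fmt.Errorf("password must be at least 8 characters")
	}
	if commonPasswords[strings.ToLower(pw)] {
		return fmt.Errorf("password is too common")
	}
	return nil
}

func main() {
	fmt.Println(reasonablePassword("password"))                     // rejected: too common
	fmt.Println(reasonablePassword("correct horse battery staple")) // accepted: <nil>
}
```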

Once you start implementing user account handling, you realize there are a lot of procedures that are common to most websites. In the interest of making these user accounts secure, it is probably not advisable to reinvent the wheel each time.

Common User Workflows

To facilitate the creation of websites with user accounts, we present github.com/rivo/users, a Go package which implements the following functions:

  • User sign up and email verification
  • Logging into a user account
  • Logging out of a user account
  • Checking if a user is logged in
  • Forgotten password and password reset
  • Email and password change

Like github.com/rivo/sessions, it is not a framework but rather a collection of tools you can integrate into your existing application. It is also somewhat opinionated, in that users are identified by their email addresses which no one else has access to. It therefore requires that you have access to an SMTP email server. If your application follows a different model, you will probably not be able to use this package out of the box. But it may still be a good start to implement your own user workflows.

The users package follows a number of rules:

  • New user accounts or accounts whose email address was changed must be verified by clicking on a link sent per email.
  • Authentication requires the user’s email and password.
  • It must not be possible to find out if a specific email address belongs to an existing user account.
  • Password strength is checked with session.ReasonablePassword() (see above).
  • Forgotten passwords are reset by clicking on a temporary link emailed to the user.
  • Users are in exactly one of three states: created, verified, and expired.

Using the package is very easy:

if err := users.Main(); err != nil {
	panic(err)
}

This will start an HTTP server with handlers for the pages listed above. You can add your own handlers to the http.DefaultServeMux before the call to users.Main(). The forms produced by those pages look like this (you will, however, need to provide your own CSS):

Forms of the github.com/rivo/users package

Alternatively, to start your own HTTP server, you can add all the package’s handlers yourself, simply by copying the implementation of the users.Main() function into your own code:

http.HandleFunc(users.Config.RouteSignUp, users.SignUp)
http.HandleFunc(users.Config.RouteVerify, users.Verify)
http.HandleFunc(users.Config.RouteLogIn, users.LogIn)
http.HandleFunc(users.Config.RouteLogOut, users.LogOut)
http.HandleFunc(users.Config.RouteForgottenPassword, users.ForgottenPassword)
http.HandleFunc(users.Config.RouteResetPassword, users.ResetPassword)
http.HandleFunc(users.Config.RouteChange, users.Change)

if err := http.ListenAndServe(users.Config.ServerAddr, nil); err != nil {
	panic(err)
}

The package’s handlers use Go templates to generate the HTML pages and the emails sent to the users. The HTML templates provided with the package contain the minimum HTML code to make the handlers work. When starting to work with this package, you will want to make a copy and adjust the templates to the needs of your application. Support for internationalization is also included.

The users.Config variable holds a large number of configuration parameters allowing you to customize the package to your needs. You may choose any database for your user objects. (The default is a pure RAM store.) And just as in the github.com/rivo/sessions package, the User type is an interface with the functions needed by this package so you can bring your own user type.
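The “bring your own user type” pattern is plain Go: satisfy the package’s interface with your existing model. The User interface below is a simplified stand-in for the real one in github.com/rivo/users, which requires additional methods (email, password hash, state, and so on):

```go
package main

import "fmt"

// User is a simplified stand-in for the package's interface; the real
// interface requires more methods than just a unique ID.
type User interface {
	GetID() interface{}
}

// account is an existing application model, unchanged apart from GetID.
type account struct {
	ID    string
	Email string
}

func (a *account) GetID() interface{} { return a.ID }

func main() {
	var u User = &account{ID: "u-42", Email: "gopher@example.com"}
	fmt.Println(u.GetID()) // u-42
}
```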


If you are planning to implement a dynamic website using Go, the two packages github.com/rivo/sessions and github.com/rivo/users can save you a lot of time. The business logic of secure web sessions and common user workflows can be deceptively complex. Our goal is to provide these functions without imposing an entire framework, so you can focus on your core application.

Of course, there are cases where these two packages may not be useful. For example, if you don’t use cookies to identify your sessions, most functions don’t apply. If your application runs on multiple servers, you may be able to use these packages with load balancers that implement sticky sessions but distributed sessions don’t come out of the box. And obviously, if your user model is very different from the model presented above, the users package may not help.

At the time of writing, both packages have just been released to the public. We would like to hear your feedback and if you encounter any problems or have suggestions, feel free to open issues on GitHub.


What is ANTLR?

ANTLR (ANother Tool for Language Recognition) is an ALL(*) parser generator. In layman’s terms, ANTLR creates parsers in a number of languages (Go, Java, C, C#, JavaScript) that can process text or binary input. The generated parser provides a callback interface to parse the input in an event-driven manner, which can be used as-is or used to build parse trees (a data structure representing the input).

ANTLR is used by a number of popular projects, e.g. Hive and Pig use it to parse Hadoop queries, Oracle and NetBeans use it for their IDEs, and Twitter even uses it to understand search queries. Support was recently added so that ANTLR 4 can be used to generate parsers in pure Go. This article will explain some of the benefits of ANTLR, and walk us through a simple example.

Why use it?

It is possible to hand write a parser, but this process can be complex, error prone, and hard to change. Instead there are many parser generators that take a grammar expressed in a domain-specific way, and generate code to parse that language. Popular parser generators include Bison and Yacc. In fact, there is a version of yacc, goyacc, written in Go, which was part of the main Go repo until it was moved to golang.org/x/tools last year.

So why use ANTLR over these?

  • ANTLR has a suite of tools, and GUIs, that makes writing and debugging grammars easy.

  • It uses a simple EBNF syntax to define the grammar, instead of a bespoke configuration language.

  • ANTLR is an Adaptive LL(*) parser, ALL(*) for short, whereas most other parser generators (e.g. Bison and Yacc) are LALR. The difference between LL(*) and LALR is out of scope for this article, but put simply, LALR parsers work bottom-up while LL(*) parsers work top-down. This has a bearing on how the grammar is written, making some languages easier or harder to express.

  • The generated code for a LL(*) parser is more understandable than that of a LALR parser. This is because LALR parsers are commonly table driven, whereas LL(*) parsers encode the logic in their control flow, making it more comprehensible.

  • Finally, ANTLR is agnostic to the target language. A single grammar can be used to generate parsers in Java, Go, C, etc. This is unlike Bison/Yacc, which typically embed target language code into the grammar, making it harder to port.

Installing ANTLR v4

ANTLR is a Java 1.7 application that generates the Go code needed to parse your language. During development Java is needed, but once the parser is built, only Go and the ANTLR runtime library are required. The ANTLR site has documentation on how to install this on multiple platforms, but in brief, you can do the following:

$ wget http://www.antlr.org/download/antlr-4.7-complete.jar
$ alias antlr='java -jar $PWD/antlr-4.7-complete.jar'

The antlr command is now available in your shell. If you prefer, the .jar file can be placed into a ~/bin directory, and the alias can be stored in your ~/.bash_profile.

Classic calculator example

Let’s start with the “hello world” for parsers, the calculator example. We want to build a parser that handles simple mathematical expressions such as 1 + 2 * 3. The focus of this article is on how to use Go with ANTLR, so the syntax of the ANTLR language won’t be explained in detail, but the ANTLR site has comprehensive documentation.

The source for all of the examples is available as we go along.

// Calc.g4
grammar Calc;

// Tokens
MUL: '*';
DIV: '/';
ADD: '+';
SUB: '-';
NUMBER: [0-9]+;
WHITESPACE: [ \r\n\t]+ -> skip;

// Rules
start : expression EOF;

expression
   : expression op=('*'|'/') expression # MulDiv
   | expression op=('+'|'-') expression # AddSub
   | NUMBER                             # Number
   ;

The above is a simple grammar split into two sections: tokens and rules. The tokens are terminal symbols in the grammar; that is, they are made up of nothing but literal characters. Rules, on the other hand, are non-terminal states made up of tokens and/or other rules.

By convention this grammar must be saved with a filename that matches the name of the grammar, in this case “Calc.g4”. To process this file, and generate the Go parser, we run the antlr command like so:

$ antlr -Dlanguage=Go -o parser Calc.g4 

This will generate a set of Go files in the “parser” package and subdirectory. It is possible to place the generated code in a different package by using the -package <name> argument. This is useful if your project has multiple parsers, or you just want a more descriptive package name for the parser. The generated files will look like the following:

$ tree
├── Calc.g4
└── parser
    ├── calc_lexer.go
    ├── calc_parser.go
    ├── calc_base_listener.go
    └── calc_listener.go

The generated files consist of three main components, the Lexer, Parser, and Listener.

The Lexer takes arbitrary input and returns a stream of tokens. For input such as 1 + 2 * 3, the Lexer would return the following tokens: NUMBER (1), ADD (+), NUMBER (2), MUL (*), NUMBER (3), EOF.

The Parser uses the Lexer’s output and applies the grammar’s rules, building higher-level constructs, such as expressions, that can be used to calculate the result.

The Listener then allows us to make use of the parsed input. As mentioned earlier, yacc requires language specific code to be embedded with the grammar. However, ANTLR separates this concern, allowing the grammar to be agnostic to the target programming language. It does this through use of listeners, which effectively allows hooks to be placed before and after every rule is encountered in the parsed input.

Using the Lexer

Let’s move onto an example of using this generated code, starting with the Lexer.

// example1.go
package main

import (
	"fmt"

	"github.com/antlr/antlr4/runtime/Go/antlr"

	"./parser" // the generated parser
)

func main() {
	// Setup the input
	is := antlr.NewInputStream("1 + 2 * 3")

	// Create the Lexer
	lexer := parser.NewCalcLexer(is)

	// Read all tokens
	for {
		t := lexer.NextToken()
		if t.GetTokenType() == antlr.TokenEOF {
			break
		}
		fmt.Printf("%s (%q)\n",
			lexer.SymbolicNames[t.GetTokenType()], t.GetText())
	}
}

To begin with, the generated parser is imported from the local subdirectory import "./parser". Next the Lexer is created with some input:

	// Setup the input
	is := antlr.NewInputStream("1 + 2 * 3")

	// Create the Lexer
	lexer := parser.NewCalcLexer(is)

In this example the input is a simple string, "1 + 2 * 3" but there are other antlr.InputStreams, for example, the antlr.FileStream type can read directly from a file. The InputStream is then passed to a newly created Lexer. Note the name of the Lexer is CalcLexer which matches the grammar’s name defined in the Calc.g4.

The lexer is then used to consume all the tokens from the input, printing them one by one. This wouldn’t normally be necessary but we do this for demonstrative purposes.

	for {
		t := lexer.NextToken()
		if t.GetTokenType() == antlr.TokenEOF {
			break
		}
		fmt.Printf("%s (%q)\n",
			lexer.SymbolicNames[t.GetTokenType()], t.GetText())
	}

Each token has two main components, the TokenType, and the Text. The TokenType is a simple integer representing the type of token, while the Text is literally the text that made up this token. All the TokenTypes are defined at the end of calc_lexer.go, with their string names stored in the SymbolicNames slice:

// calc_lexer.go
const (
	CalcLexerMUL        = 1
	CalcLexerDIV        = 2
	CalcLexerADD        = 3
	CalcLexerSUB        = 4
	CalcLexerNUMBER     = 5
	CalcLexerWHITESPACE = 6
)

You may also note that the whitespace token is not printed, even though the input clearly contained whitespace. This is because the grammar was designed to skip (i.e. discard) it: WHITESPACE: [ \r\n\t]+ -> skip;.

Using the Parser

The Lexer on its own is not very useful, so the example can be modified to also use the Parser and Listener:

// example2.go
package main

import (
	"github.com/antlr/antlr4/runtime/Go/antlr"

	"./parser" // the generated parser
)

type calcListener struct {
	*parser.BaseCalcListener
}

func main() {
	// Setup the input
	is := antlr.NewInputStream("1 + 2 * 3")

	// Create the Lexer
	lexer := parser.NewCalcLexer(is)
	stream := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)

	// Create the Parser
	p := parser.NewCalcParser(stream)

	// Finally parse the expression
	antlr.ParseTreeWalkerDefault.Walk(&calcListener{}, p.Start())
}

This is very similar to before, but instead of manually iterating over the tokens, the lexer is used to create a CommonTokenStream, which in turn is used to create a new CalcParser. This CalcParser is then “walked”, which is ANTLR’s event-driven API for receiving the results of parsing the rules.

Note, the Walk function does not return anything. Some may have expected a parsed form of the expression to be returned, such as some kind of AST (abstract syntax tree), but instead the Listener receives events as the parsing occurs. This is similar in concept to SAX style parsers for XML. Event-based parsing can sometimes be harder to use, but it has many advantages. For example, the parser can be very memory efficient as previously parsed rules can be discarded once they are no longer needed. The parser can also be aborted early if the programmer wishes to.
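The same event-driven style exists in Go’s standard library: encoding/xml’s token API streams parse events one at a time instead of building a document tree, much like an ANTLR listener receives Enter/Exit calls:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// events streams SAX-style parse events for a small XML document,
// without ever building a document tree.
func events(doc string) []string {
	dec := xml.NewDecoder(strings.NewReader(doc))
	var out []string
	for {
		tok, err := dec.Token() // one event at a time
		if err != nil {
			break // io.EOF ends the stream
		}
		switch t := tok.(type) {
		case xml.StartElement:
			out = append(out, "enter "+t.Name.Local)
		case xml.EndElement:
			out = append(out, "exit "+t.Name.Local)
		}
	}
	return out
}

func main() {
	for _, e := range events("<a><b>hi</b></a>") {
		fmt.Println(e) // enter a, enter b, exit b, exit a
	}
}
```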

But so far, this example doesn’t do anything beyond ensuring the input can be parsed without error. To add logic, we must extend the calcListener type. The calcListener has an embedded BaseCalcListener, a helper type that provides empty methods for all those defined in the CalcListener interface. That interface looks like:

// parser/calc_listener.go
// CalcListener is a complete listener for a parse tree produced by CalcParser.
type CalcListener interface {

	// EnterStart is called when entering the start production.
	EnterStart(c *StartContext)

	// EnterNumber is called when entering the Number production.
	EnterNumber(c *NumberContext)

	// EnterMulDiv is called when entering the MulDiv production.
	EnterMulDiv(c *MulDivContext)

	// EnterAddSub is called when entering the AddSub production.
	EnterAddSub(c *AddSubContext)

	// ExitStart is called when exiting the start production.
	ExitStart(c *StartContext)

	// ExitNumber is called when exiting the Number production.
	ExitNumber(c *NumberContext)

	// ExitMulDiv is called when exiting the MulDiv production.
	ExitMulDiv(c *MulDivContext)

	// ExitAddSub is called when exiting the AddSub production.
	ExitAddSub(c *AddSubContext)
}

There is an Enter and Exit function for each rule found in the grammar. As the input is walked, the Parser calls the appropriate function on the listener, to indicate when the rule starts and finishes being evaluated.

Adding the logic

A simple calculator can be constructed from this event driven parser by using a stack of values. Every time a number is found, it is pushed onto a stack. Every time an expression (add/multiply/etc) is found, the last two numbers on the stack are popped and the appropriate operation is carried out. The result is then placed back on the stack.

Take the expression 1 + 2 * 3, the result could be either (1 + 2) * 3 = 9, or 1 + (2 * 3) = 7. Those that recall the order of operations will know that multiplication should always be carried out before addition, thus the correct result is 7. However, without the parentheses there could be some ambiguity on how this should be parsed. Luckily the ambiguity is resolved by the grammar. The precedence of multiplication over addition was subtly implied within Calc.g4, by placing the MulDiv expression before the AddSub expression.

The code for a listener that implements this stack-of-values approach is relatively simple:

type calcListener struct {
	*parser.BaseCalcListener

	stack []int
}

func (l *calcListener) push(i int) {
	l.stack = append(l.stack, i)
}

func (l *calcListener) pop() int {
	if len(l.stack) < 1 {
		panic("stack is empty unable to pop")
	}

	// Get the last value from the stack.
	result := l.stack[len(l.stack)-1]

	// Remove the last element from the stack.
	l.stack = l.stack[:len(l.stack)-1]

	return result
}

func (l *calcListener) ExitMulDiv(c *parser.MulDivContext) {
	right, left := l.pop(), l.pop()

	switch c.GetOp().GetTokenType() {
	case parser.CalcParserMUL:
		l.push(left * right)
	case parser.CalcParserDIV:
		l.push(left / right)
	default:
		panic(fmt.Sprintf("unexpected op: %s", c.GetOp().GetText()))
	}
}

func (l *calcListener) ExitAddSub(c *parser.AddSubContext) {
	right, left := l.pop(), l.pop()

	switch c.GetOp().GetTokenType() {
	case parser.CalcParserADD:
		l.push(left + right)
	case parser.CalcParserSUB:
		l.push(left - right)
	default:
		panic(fmt.Sprintf("unexpected op: %s", c.GetOp().GetText()))
	}
}

func (l *calcListener) ExitNumber(c *parser.NumberContext) {
	i, err := strconv.Atoi(c.GetText())
	if err != nil {
		panic(err.Error())
	}
	l.push(i)
}

Finally this listener would be used like so:

// calc takes a string expression and returns the evaluated result.
func calc(input string) int {
	// Setup the input
	is := antlr.NewInputStream(input)

	// Create the Lexer
	lexer := parser.NewCalcLexer(is)
	stream := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)

	// Create the Parser
	p := parser.NewCalcParser(stream)

	// Finally parse the expression (by walking the tree)
	var listener calcListener
	antlr.ParseTreeWalkerDefault.Walk(&listener, p.Start())

	return listener.pop()
}

Following the algorithm, the parsing of 1 + 2 * 3 works like so (the walker visits the tree depth-first, in document order):

  1. The number 1 is visited first and pushed onto the stack.
  2. Then the numbers 2 and 3 are visited and pushed onto the stack.
  3. Next the MulDiv expression is visited, popping the values 2 and 3, multiplying them, and pushing the result, 6, back onto the stack.
  4. Finally AddSub is visited, popping the 1 and the 6 from the stack, and pushing the result, 7, back.

The order the rules are visited is completely driven by the Parser, and thus the grammar.

More grammars

Learning how to write a grammar may be daunting, but there are many resources for help. The author of ANTLR, Terence Parr, has published a book, with some of the content freely available on antlr.org.

If you don’t want to write your own grammar, there are many pre-written grammars available, including grammars for CSS, HTML, SQL, etc, as well as many popular programming languages. To make things easier, I have generated parsers for all of those grammars, making them usable just by importing them.

A quick example of using one of the pre-generated grammars:

import (
	"bramp.net/antlr4/json" // The parser

	"github.com/antlr/antlr4/runtime/Go/antlr"
)

type exampleListener struct {
	// https://godoc.org/bramp.net/antlr4/json#BaseJSONListener
	*json.BaseJSONListener
}

func main() {
	// Setup the input
	is := antlr.NewInputStream(`{
			"example": "json",
			"with": ["an", "array"]
	}`)

	// Create the JSON Lexer
	lexer := json.NewJSONLexer(is)
	stream := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)

	// Create the JSON Parser
	p := json.NewJSONParser(stream)

	// Finally walk the tree
	antlr.ParseTreeWalkerDefault.Walk(&exampleListener{}, p.Json())
}


Hopefully this article has given you a taste of how to use ANTLR with Go. The examples for this article are found here, and the godoc for the ANTLR library is here which explains the various InputStream, Lexer, Parser, etc interfaces.

If you have any questions or comments, please reach out to me at @TheBramp or visit my website and blog, bramp.net for more articles.


Useful Programs

I’ll start this post with a bold but fairly uncontroversial statement: Programs that do not interact with the outside world are useless - they do nothing except consume cycles. A corollary of this is that pure functional programming is useless. Here is a (mostly) pure functional program written in Go that does absolutely nothing:

package foo

func incr(i int) int { return i+1 }
func decr(i int) int { return i-1 }
func foo()           { decr(incr(0)) }

While obviously there are other things that make a program useful, the most fundamental thing that makes a program useful is that it needs to interact with things outside itself. This can come in the form of reading a file, reading a network input, reading user input, printing an output, or writing to a file. Indeed, so fundamental is input/output to the notion of programming that most languages have a standardized definition for a program entry point. In Go, it’s the func main(){} function.

Another feature of useful programs is that they have to be robust. Here, the notion of a fragile program can be illustrated by an example: imagine a program that breaks down on every other input fed to it. It’s not very useful, is it?

Lastly, a program has to do what we intend for it to do in order to be useful. If I want a program to add two numbers together, a useless program would be one that plays music instead of calculating the sum of two numbers.

To recap, a useful program has to:

  • Deal with I/O
  • Be robust to various inputs
  • Do the correct thing

Mo’ Inputs Mo’ Problems

We’ve established that inputs and outputs are what make programs useful. For the rest of this post, I’ll talk mostly about inputs. And to spare readers the philosophical conundrum that may occur, I’ll limit myself to talking about inputs that are controllable by the programmer. In practice, this means inputs to functions.

The problem with inputs is that there can be many many different inputs. Let’s say we define a function Add. Add takes two inputs, a and b and spits out a result. We’ll write it like this: Add(a, b). Some combinations of a and b will not make sense: adding a Duck and a Flower does not make sense - they’re two different types of things!

So now there’s an intuition that in order for programs to return meaningful results, there are different types of things at play - we should define our functions to be explicit about what things are being accepted. This is typically the first line of defence - the type system. But this post is about testing, not about type systems. So why bring it up? Most modern type systems have an escape hatch, allowing programmers to subvert them, either for I/O purposes or performance purposes.

Clearly a type system helps. But if our job is to write a robust program, we would need more help than just a type system.

Testing 101

Recall the three things that a useful program requires. The last one is “Do the correct thing”. A good type system would prevent a lot of wrong inputs from being entered to a program (preferably at compile time). But the actual actions that your program does could actually be wrong. Consider this function:

func Add(a, b int) int { return a - b }

According to the type system it’s correct. It takes two int inputs, and outputs an int. But it’s doing the wrong thing! I wouldn’t want this function to be anywhere near my accounting software! This is why we write tests - to ensure that the code does what it’s meant to do.

Go supports testing as a first class citizen of the language. To write tests, you simply write a function that looks like this:

func TestAdd(t *testing.T) {
	if out := Add(1, 2); out != 3 {
		t.Errorf("Add failed. Expected 3. Got %d instead", out)
	}
}
Then run go test. It just works.

The Case Of Perverse Incentives

Testing is great. It ensures that many people can work on the same project without stepping (much) on each others’ toes. But blind insistence on testing can lead to poor software.

Quite a while back, when I was running my own startup, I’d hire freelancers to help out with my workload. I’d require tests to be written, to ensure at least a minimum quality of work. At first everything went fine. Eventually bugs were found in the algorithms written by my freelancer. It turned out there was a whole class of edge cases that I had never anticipated. I’d send the code back, and it’d get fixed, tests included. Now the following is entirely my fault - I had reviewed the code entirely too cursorily. If Travis-CI was green, I’d approve and merge the changes in.

After a while, I started noticing that the class of edge cases I thought he’d fixed were coming back. That’s when I went back to the code, only to discover that the tests had passed because the algorithm was fixed to handle only the specific edge case I had specified, and nothing else.

Along the way I discovered other things that made me really quite upset (mostly at myself for not reviewing the code carefully). Here’s what happened. Imagine I asked you to test the Add function from above. What is one valid way to make sure it always passes? Here’s one way:

func TestAdd(t *testing.T) {
	a := 1
	b := 2
	out := Add(a, b)
	if out != a + b {
		t.Errorf("Add failed...")
	}
}
If you can’t tell what’s wrong, here’s a less subtle version of it:

func TestAdd(t *testing.T) {
	a := 1
	b := 2
	out := Add(a, b)
	if out != Add(a, b) {
		t.Errorf("Add failed...")
	}
}
If you’re testing function Foo, you can’t use Foo to verify that Foo works! That’s just nuts!

Testing On More Inputs

“Be robust to various inputs” means that you have to test for a lot of different inputs. OK codebases will have tests for the happy path, where everything goes well. Better code bases will have test cases where the program fails - the so-called tragic path - to test the ability of the program to handle failure conditions correctly. Being able to correctly handle failure conditions is the prerequisite to robust software.

Testing along the tragic path is not easy. For many functions and programs it’s difficult to reason out where it may go wrong. In Go, this is somewhat alleviated by making errors a value. Even so, a codebase may often choose to ignore error checking:

func returnsError() error { return errors.New("This Error") }

func foo() {
	returnsError() // bad! Don't do this! Always check for and handle errors!
}

But let’s say you are a diligent programmer and you check all the errors. Other forms of failure may occur too: panics from slice indices being out of bounds, and so on.

So an idea emerges: why don’t we test on all possible values? Or at least as many values as we can?

Enter the notion of fuzz testing: we feed the program random inputs, and then watch for it to fail. Fuzz testing often leads one to find subtle bugs you don’t expect. Here’s one that Chris Marshall discovered on my skiprope library.

Here’s another, from a random number generator library I wrote for Gorgonia, where a Kahan summation algorithm yielded a different result from expected due to the way errors in floats are handled.

I have, to this day, no idea what the correct fixes are. Solutions welcome.

A More Disciplined Approach

Fuzz testing works by throwing everything at the wall and seeing what sticks. It’s extremely useful for finding bugs you hadn’t reasoned out yet. That typically falls under the purview of “be robust to various inputs” part of building useful programs. But it’s not good enough to test what inputs work and what inputs cause a panic. You also want to test that those inputs are doing the correct things.

“Doing the correct things” means thinking deeply about what your program is supposed to do. You gotta be honest with yourself - if you are testing an Add function on integers, you can’t use Add or + to verify the results. That’s the circular logic trap that many fall into. I find myself also occasionally falling into the same trap due to laziness.

Testing for results is tedious anyway - you have to know that the result of 1 + 2 is 3. And that’s for the simple stuff. Imagine having to know the answers ahead of time for random input for more complicated programs.


Instead of doing that, we can adopt a different approach to testing. Instead of asking “what is the expected result of this program given these example inputs?”, we ask the question: “what is the property of the result and program that doesn’t change given inputs?”

All that sounds a bit abstract. But let’s apply it to the Add function that we’ve become so familiar with - what doesn’t change in Add?

Well, we know through much real life experience that a + b is equal to b + a. Mathematicians call this property commutativity. So test for it. Write a test for Add that takes a pair of random inputs, performs the addition once, then performs it again with the operands’ order switched. Both runs should have the same result.

Of course, you could have a test that reads something like this:

func TestAdd(t *testing.T) {
	// a function that checks for commutativity
	comm := func(fn func(a, b int) int) bool {...}
	if !comm(Add) {
		t.Error("Not Commutative")
	}
}

func Add(a, b int) int { return a * b } // PEBKAC error

The test above will pass. But it’d be doing the wrong thing. Here you need to be asking, what makes Add different from Mul? Or more generally: What makes any function that has a signature func(a, b int) int different from Add?

There are obviously a few guidelines, which I will list below, but before that, do have a think.

The point I’m trying to make here is that there are multiple properties to be tested for every program. You can’t just test one property and be done with it.

How To Do Property Based Testing In Go

So, with the Whys out of the way, let’s talk about How to do property based testing in Go. Go actually comes with a built in property testing tool, "testing/quick". Given that property based testing is based on Haskell’s QuickCheck, naturally the testing function is called quick.Check. The gist of how it works is simple: write your property test as a function, pass it into the quick.Check function, and Bob’s your uncle.

Here’s how to test for commutativity with "testing/quick":

import "testing/quick"

func TestAdd(t *testing.T) {
	// property test
	comm := func(a, b int) bool {
		if Add(a, b) != Add(b, a) {
			return false
		}
		return true
	}
	if err := quick.Check(comm, nil); err != nil {
		t.Error(err)
	}
}

Here we see that comm was defined rather simply: if Add(a, b) != Add(b, a) then we say that the Add function does not fulfil the commutativity property.

What the quick.Check function then does is it generates values based on the input arguments of the testing function. In this case, comm is a function that takes two int. The package understands that it needs to generate two int to be fed into the function.

The package works for non-primitive types too. Further extensions to functionality can be had by types that implement the quick.Generator interface.

Take for example, a coordinate:

type Point struct {
	x, y int
}

As long as Point implements quick.Generator, you can put it in the input of the property testing function, like so:

func (Point) Generate(r *rand.Rand, size int) reflect.Value {
	p := Point{}
	p.x = r.Int() // use the supplied *rand.Rand, not the global source
	p.y = r.Int()
	return reflect.ValueOf(p)
}

This is especially useful for types where there are unexported properties. If all your fields are exported, testing/quick can typically generate values for them.

Because Go doesn’t have value-constrained types (known as ranged types in Ada, for example), it can be instructive to test on a subset of values, like ints that are even. I’d go so far as to say: create separate tests for different classes of ranged values.

How To Think About Properties

So we’ve now been briefly introduced to the world of properties: there is now a notion that there are some properties that functions and programs have. Here are some properties that an addition function for numbers should have:

  • Commutativity: Add(a, b) == Add(b, a)
  • Associativity: Add(a, Add(b, c)) == Add(Add(a, b), c)
  • Identity: Add(a, IDENTITY) == a.

Let’s talk about the Identity property. The Identity property states that calling the function with any value and the identity value will always yield that same value. When you think about it, the IDENTITY value of Add is 0, because any number plus 0 is that number itself. Similarly, the IDENTITY value of Mul is 1, because any number multiplied by 1 will always be that number itself. This is what separates Add and Mul.
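Here’s a minimal sketch of checking the identity property with "testing/quick" (assuming the Add from earlier; written as a standalone program rather than a test for brevity):

```go
package main

import (
	"fmt"
	"testing/quick"
)

func Add(a, b int) int { return a + b }

func main() {
	// Identity: adding 0 to any value yields the value itself.
	identity := func(a int) bool { return Add(a, 0) == a }
	if err := quick.Check(identity, nil); err != nil {
		fmt.Println("identity failed:", err)
		return
	}
	fmt.Println("identity holds")
}
```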

This opens a new avenue for us to think about properties. Often properties are described in scary mathematical language, like “Commutativity”, or “Idempotency” or “Invariants”. But look closer, and we’ll find hints on how to think about creating properties to test for.

I’ve actually mentioned this earlier, but it bears repeating: what doesn’t change? In the first two properties above, commutativity and associativity, the results don’t change when the order of operations is changed. In the identity property, the value itself doesn’t change.

The trick to thinking of properties to test for is to figure out what does not change under various situations. The jargon frequently used is “invariant”.

For example, let’s say you invented a new sort algorithm. What properties would all sort algorithms have to hold in order to be useful? Well, for one, the container size should not change. Imagine a sorting algorithm that sorts a list and the result has fewer or more elements than when it started. Surely something fishy is happening there.

As another example, recently I wrote a parser. What are the properties of a parser? One thing I tested was that I was able to retrieve the input from the parser - that is to say, I’m able to make the input and output the same. That test found a bunch of bugs in the parsing logic that I have yet to fix.

Good Properties To Test

Some properties are better than other properties. For example, let’s go back to the Add example. We know from real life experience that when you add two numbers together, the result is bigger than either operand.

Except that isn’t true. What if one of the operands were negative numbers?

OK, so we decide that our Add function will work on positive numbers only. Certainly, adding two positive numbers will result in a number that is larger than either operand.

That is true for all positive numbers - we can consider this as a property of addition on unsigned integers. In real life though, we have to deal with machines and data types that have physical limitations. Consider this:

func Add(a, b byte) byte { return a + b  }

What happens when you do this: Add(255, 1)? The result isn’t larger! Because of overflow, the result wraps around to 0. Now it can be quite difficult to figure out if an overflow has happened (unless you are writing in assembly, in which case you can just check the overflow flag). So, maybe this isn’t a great property to test.
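You can see the wraparound for yourself with this tiny program (the same Add on byte as above):

```go
package main

import "fmt"

func Add(a, b byte) byte { return a + b }

func main() {
	// 255 is the maximum value of a byte; adding 1 wraps around.
	fmt.Println(Add(255, 1)) // prints 0
}
```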

PBT You Probably Already Do

One of the interesting things about property based testing is that you probably already do some of it. Take the encoding/gob package for example. The EncodeDecode example in the gob package is a very restricted form of property based testing - it’s not testing as many inputs as possible, but it wouldn’t be difficult to add a generator to do that.

Think about the properties that the gob package is trying to test. It’s testing for the fact that the data does not change after the encoding/decoding process.

What Is Property Based Testing

Given the descriptions above, it’s easy to figure out what property-based testing is, and what it isn’t. Property-based testing is all about testing that a program satisfies the specified properties, across a wide variety of input. Properties are usually abstract notions of something that is invariant.

Usually a property based testing library comes with generators to generate input values to test. But that’s just a technical part. Theoretically you could do property-based testing if you provide your own corpus of test inputs. I once consulted for a media company. Because of the vast media library available, we could just use that as test input (videos make for good random inputs).

A notion many people get confused about is that property-based testing has to be exactly like what QuickCheck does - that anything which doesn’t implement combinators and use an HM-ish type system therefore isn’t property-based testing. While I will say that there are some nice things in QuickCheck that exist because of those, the idea of property-based testing does not require them.

Real World Property Based Testing

All the text above is quite dry and hypothetical in nature. You might be thinking, “but I don’t write functions that are commutative or associative”. Rest assured that property based testing is useful in real life programming. Real life property based testing is also a lot less strict about what counts as a property.

The Gorgonia tensor package provides fast-ish generic multidimensional slices. It has fairly complicated machinery to make it fast and generic enough for deep learning work (if you’re interested, I laid out the foundations in a separate post).

It’s generic across data types (float64, float32, etc.), execution engines (CPU, GPU, or even the cloud) and execution options (whether to perform operations in place, etc.). This leaves a LOT of possible code paths. The more code paths there are, the more chances of error there are.

For that package I use property based testing to check the assumptions that I have. Here for example, is a snippet for checking that Log10 works.

func TestLog10(t *testing.T) {
	var r *rand.Rand

	// default function operation
	invFn := func(q *Dense) bool {
		a := q.Clone().(*Dense)
		correct := a.Clone().(*Dense)
		we, willFailEq := willerr(a, floatTypes, nil)
		_, ok := q.Engine().(Log10er)
		we = we || !ok

		// we'll exclude everything other than floats
		if err := typeclassCheck(a.Dtype(), floatTypes); err != nil {
			return true
		}
		ret, err := Log10(a)

		// ok is a bool indicating it's ok to return early
		if err, ok := qcErrCheck(t, "Log10", a, nil, we, err); ok {
			if err != nil {
				return false
			}
			return true
		}

		ten := identityVal(10, a.Dtype())
		Pow(ten, ret, UseUnsafe())

		cd := correct.Data()
		rd := ret.Data()
		if !qcEqCheck(t, a.Dtype(), willFailEq, cd, rd) {
			return false
		}
		return true
	}
	r = rand.New(rand.NewSource(time.Now().UnixNano()))
	if err := quick.Check(invFn, &quick.Config{Rand: r}); err != nil {
		t.Errorf("Inv tests for Log10 failed: %v", err)
	}
}

Allow me to walk through the test. The test is testing the function Log10. The property of the function that this test performs is that the inverse function should yield a result that is the input. In other words, we want to prove that Log10 works by saying that 10 ^ (Log10(x)) == x.

Because there are some helper functions, here’s the line-by-line walkthrough:

  • invFn := func(q *Dense) bool - this line defines the property we’re testing.
  • correct := a.Clone().(*Dense) - the end result should be the same as the input.
  • we, willFailEq := willerr(a, floatTypes, nil) - Sometimes values may be inaccessible (for example, values in a GPU can only be accessed on the GPU; attempting to access them via Go will cause the program to crash). Fortunately, this information is known upfront. If the generator generates an inaccessible value, then we know that the function will return an error. Furthermore, some functions may not return an error, but the values cannot be compared for equality.
  • _, ok := q.Engine().(Log10er) - if the associated execution engine cannot perform Log10, it will return an error. Again, more upfront information we’d need to know from what was generated.
  • if err := typeclassCheck(a.Dtype(), floatTypes) - because the tensor package is fairly generic, there would be data types of which Log10 wouldn’t make sense - string for example. In this case, we’d get rid of any generated *Dense tensors that aren’t float types.
  • ret, err := Log10(a) - actually perform the Log10 operation
  • err, ok := qcErrCheck(t, "Log10", a, nil, we, err); ok - check whether the operation errored as expected, or produced no errors. If there were errors, and it’s safe to return early (i.e. an error was indeed expected), the function returns early.
  • ten := identityVal(10, a.Dtype()) - create a value that is a representation of 10, in the Dtype provided.
  • Pow(ten, ret, UseUnsafe()) - perform 10 ^ ret. This is the inverse operation of Log10. UseUnsafe is a function option of Gorgonia’s tensor, which allows the operation to be done in-place so additional memory wouldn’t have to be allocated.
  • if !qcEqCheck(t, a.Dtype(), willFailEq, cd, rd) - check that the result is the same.

It should be noted that the code contains some bad practices - willFailEq will always be true in the code above. But in the checking code (qcEqCheck), if the data type is actually a float type (float32, float64, etc.), willFailEq will be ignored, and an approximate-equality check for floats will be used instead. Floats are treated differently because they are used a lot more in machine learning, so there is a greater imperative that they be tested more thoroughly.

As you can see there is quite a bit of set up for a fairly complex machinery. The upside is that there are many thousands of values tested and I’m fairly sure that for a large percentage of inputs, the function wouldn’t break.


As with anything, there are some pitfalls to property based testing. For one, property based testing doesn’t exist in a vacuum. I don’t want readers walking away from the post thinking that property-based testing is the end-all of writing robust software.

Instead I think it’s a combination of various types of testing (traditional unit testing, fuzz testing, property based testing, integration testing), type systems and sound practices (code review, etc).

However, by itself, property based testing is not exhaustive. If you only test for one property of the function you are testing, you only test for that one property. When you test multiple properties of a function, you approach a more complete test of the semantic correctness of the program.

There are also some edge cases with the notion of property based testing that need some understanding. Take for example, the equality testing of tensors in Gorgonia:

func TestDense_Eq(t *testing.T) {
	eqFn := func(q *Dense) bool {
		a := q.Clone().(*Dense)
		if !q.Eq(a) {
			t.Error("Expected a clone to be exactly equal")
			return false
		}
		a.Zero() // fill the clone with its zero value

		// Bools are excluded because the probability
		// of having an array of all false is very high
		if q.Eq(a) && a.len() > 3 && a.Dtype() != Bool {
			t.Errorf("a %v", a.Data())
			t.Errorf("q %v", q.Data())
			t.Error("Expected *Dense to be not equal")
			return false
		}
		return true
	}
	conf := &quick.Config{Rand: newRand(), MaxCount: 2000}
	if err := quick.Check(eqFn, conf); err != nil {
		t.Errorf("Failed to perform equality checks")
	}
}

Here we are doing two tests (which I rolled into one):

  1. Any clone of the *Dense has to be equal to its original. This test runs on the assumption that the Clone method works correctly - this is a very loose form of reflexivity.
  2. Any *Dense would not be equal to a *Dense that is filled with its zero value (with the exception that the original is all zeroes).

Here we run into an edge case of probabilities: it’s highly probable that the generator generates []bool{false, false, false} - so if that condition is met, then we just say the test passes.

This is a sign of poor thinking. I clearly didn’t devote enough time and effort into thinking about the properties of an equality test across all possible types. To be clear, here are the things that should have been tested:

  • Transitivity - a == b and b == c implies a == c
  • Symmetry - a == b implies b == a
  • Reflexivity - a == a. This was originally done without using Clone. Reflexivity is all about testing of values. If a.Eq(a) is done, the Eq function would note that they’re the same object, and return true. Instead, a clone, which is a different object but has the same value is used.
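A sketch of what testing those properties could look like, using a stand-in Eq on plain ints (note the caveat in transitivity: with purely random inputs the premise a == b && b == c almost never holds, so a real test would generate related values):

```go
package main

import (
	"fmt"
	"testing/quick"
)

// Eq is a stand-in equality function on ints.
func Eq(a, b int) bool { return a == b }

func main() {
	// Symmetry: a == b implies b == a (and vice versa).
	symmetry := func(a, b int) bool { return Eq(a, b) == Eq(b, a) }

	// Transitivity: a == b and b == c implies a == c.
	// With purely random inputs the premise rarely holds, so the
	// property is mostly vacuously true; generate related values
	// in a real test.
	transitivity := func(a, b, c int) bool {
		if Eq(a, b) && Eq(b, c) {
			return Eq(a, c)
		}
		return true
	}

	if err := quick.Check(symmetry, nil); err != nil {
		fmt.Println("symmetry failed:", err)
		return
	}
	if err := quick.Check(transitivity, nil); err != nil {
		fmt.Println("transitivity failed:", err)
		return
	}
	fmt.Println("equality properties hold")
}
```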

These tests were written for most of the comparison operators, but not for equality. I attribute the problem firstly to my bad habit of selectively choosing what to test, followed by undermining the purpose of property-based tests. If you find yourself writing a lot of exceptions to the rules of the property that you’re testing, maybe it’s time to rethink the test.

Advanced Libraries

One of the nice things that Haskell’s QuickCheck does that Go’s built in "testing/quick" doesn’t have is shrinking. Shrinking is the idea that the library is able to find the minimum reproducible test case for a failure. And as you can see with my real world example, I was pushing "testing/quick" to its limits, with a lot of weird looking escape hatches in my code.

Enter GOPTER. It does shrinking, and has many helpers for stateful tests. It also comes with some code generators to help with writing tests.

While I have played with GOPTER once or twice, I haven’t had an opportunity to actually use GOPTER in a real world setting. The biggest benefit I see of GOPTER over "testing/quick" is shrinking. The problem is most of my code deals with mathematical primitives - when an error occurs, I don’t need a shrinker, I can automatically deduce what went wrong in my head. I aim to actually use GOPTER a lot more than "testing/quick" in the coming year.


In this post, I introduced the notion of a useful program, how we would test for some correctness, and the different tools available before we come to the meat of the article: property based testing. I then gave an overview of how to do basic property testing, a real world example, and some pitfalls. Finally I showcased an advanced library for property based testing. If you would like to read more, here are some real world projects that use property based testing to good effect - it was very instructive for me to study them to see how other people do property based testing:

  • time
  • miekg/dns
  • The standard library is actually quite littered with testing/quick based tests.

Lastly, please feel free to reach out to me if there are any questions. I’m almost always available to help.

About The Author

Chewxy is an author of the Gorgonia suite of libraries. He has a vested interest in making Go the de facto language for machine learning and AI related things (mainly because Go is the right balance between high development speed and high maintainability). He’s passionate about natural language processing, linguistics and statistics. Follow @chewxy on Twitter.


In this post, I will tell a story of how I had to build a custom JSON unmarshaler for the needs of a GraphQL client library in Go. I’ll start with the history of how everything started, build motivation for why a custom JSON marshaler was truly needed, and then describe how it was implemented. This is going to be a long journey, so strap yourself in, and here we go!

History of GraphQL Client Library in Go

GraphQL is a data query language for APIs, developed internally by Facebook in 2012, and made publicly available in 2015. It can be used as a replacement for, or in addition to, REST APIs. In some ways, it offers significant advantages compared to REST APIs, making it an attractive option. Of course, as with any newer technology, it’s less mature and has some weaknesses in certain areas.

In May of 2017, I set out to build the first GraphQL client library for Go. Back then, only GraphQL server libraries existed in Go. My main motivation was wanting to be able to access GitHub GraphQL API v4 from my Go code, which had just come out of early access back then. I also knew that a general-purpose GraphQL client library would be useful, enabling Go projects to access any GraphQL API. There were GraphQL clients available in other languages, and I didn’t want Go users to be missing out.

I spent a week on the initial investigation and research into what a Go client for GraphQL could look like. GraphQL is strongly typed, which is a good fit for Go. However, it also has some more advanced query syntax and features that play better with more dynamic languages, so I had my share of concerns whether a good client in Go would even be viable. Fortunately, at the end of that week, I found that a reasonable client in Go was indeed possible, and pushed a working initial prototype that had most basic functionality implemented, with a plan for how to implement and address the remaining features.

I documented the history of my findings and design decisions made in this issue. I’ve also given a talk (slides here) about the rationale and thinking behind many of the API and design decisions that went into the library. In this post, I want to talk about something I haven’t covered before: implementing a custom JSON unmarshaler specifically for the needs of the GraphQL client library, in order to improve support for GraphQL unions.

JSON Unmarshaling Task at Hand

Unmarshaling JSON into a structure is a very common and well understood problem. It’s already implemented inside the encoding/json package in Go standard library. Given that JSON is such a well specified standard, why would anyone need to implement their own JSON unmarshaler?

To answer that, I need to provide a little context about how GraphQL works. The GraphQL client begins by sending a request containing a GraphQL query, for example:

query {
	me {
		name
		bio
	}
}
The GraphQL server receives it, processes it, and sends a JSON-encoded response for that query. The response contains a data object and potentially other miscellaneous fields. We’re primarily interested in the data object, which looks like this:

	"me": {
		"name": "gopher",
		"bio": "The Go gopher."

Notice it has the same shape as the query. That’s why the graphql package was designed so that to make a query, you start by defining a Go struct variable. That variable then both defines the GraphQL query that will be made, and gets populated with the response data from the GraphQL server:

var query struct {
	Me struct {
		Name string
		Bio  string
	}
}
err := client.Query(ctx, &query, nil)
if err != nil {
	// Handle error.
}
fmt.Println(query.Me.Name)
fmt.Println(query.Me.Bio)

// Output:
// gopher
// The Go gopher.

Initially, encoding/json was used for unmarshaling the GraphQL response into the query structure. It worked well for most things, but there were some problems.

Motivation for Custom JSON Unmarshaler

There were at least 3 clear problems with encoding/json for unmarshaling GraphQL responses into the query structure. These served as motivation to write a custom JSON unmarshaler for graphql needs.

  1. The largest problem with using encoding/json became apparent when looking to support the GraphQL unions feature. In GraphQL, a union is a type that can be one of several object types.

    query {
        mascot(language: "Go") {
            ... on Human {
                name
                height
            }
            ... on Animal {
                name
                hasTail
            }
        }
    }

    In this query, we’re asking for information about Go’s mascot. We don’t know in advance what exact type it is, but we know what types it can be. Depending on whether it’s an Animal or Human, we ask for additional fields of that type.

    To express that GraphQL query, you can create the following query struct:

    var query struct {
        Mascot struct {
            Human struct {
                Name   string
                Height float64
            } `graphql:"... on Human"`
            Animal struct {
                Name    string
                HasTail bool
            } `graphql:"... on Animal"`
        } `graphql:"mascot(language: \"Go\")"`
    }

    The JSON-encoded response from GraphQL server will contain:

        "mascot": {
            "name": "Gopher",
            "hasTail": true

    You can see that in this case the shape of the response doesn’t quite align with the query struct. GraphQL inlines or embeds the fields from Animal into the “mascot” object. The encoding/json unmarshaler will not be able to handle that in the way we’d want, and the fields in the query struct will be left unset. See proof on the playground.

    You could try to work around it by using Go’s embedded structs. If you define query as:

    type Human struct {
        Name   string
        Height float64
    }
    type Animal struct {
        Name    string
        HasTail bool
    }
    var query struct {
        Mascot struct {
            Human  `graphql:"... on Human"`  // Embedded struct.
            Animal `graphql:"... on Animal"` // Embedded struct.
        } `graphql:"mascot(language: \"Go\")"`
    }

    That gets you almost the right results, but there’s a significant limitation at play. Both Human and Animal structs have a field with the same name, Name.

    According to the encoding/json unmarshaling rules:

    If there are multiple fields at the same level, and that level is the least nested (and would therefore be the nesting level selected by the usual Go rules), the following extra rules apply:

    1. Of those fields, if any are JSON-tagged, only tagged fields are considered, even if there are multiple untagged fields that would otherwise conflict.

    2. If there is exactly one field (tagged or not according to the first rule), that is selected.

    3. Otherwise there are multiple fields, and all are ignored; no error occurs.

    Multiple fields are ignored. So, Name would be left unset. See proof on the playground.

    An initial reaction might be that it’s a bug or flaw in encoding/json package and should be fixed. However, upon careful consideration, this is a very ambiguous situation, and there’s no single clear “correct” behavior. The encoding/json unmarshaler makes a sensible compromise for generic needs, not GraphQL-specific needs.

  2. To have additional control over the GraphQL query that is generated from the query struct, the graphql struct field tag can be used. It allows overriding how a given struct field gets encoded in the GraphQL query. Suppose the user happens to use a field with a name that doesn’t match that of the GraphQL field:

    var query struct {
        Me struct {
            Photo string `graphql:"avatarUrl(width: 194, height: 180)"`
        }
    }

    The JSON-encoded response from GraphQL server will contain:

    {
        "me": {
            "avatarUrl": "https://golang.org/doc/gopher/run.png"
        }
    }

    As a result, query.Me.Photo will not be populated, since the field is “avatarUrl” in the response, and the Go struct has a field named “Photo”, which doesn’t match.

    This happens because encoding/json unmarshaler is unaware of the graphql struct field tags.

  3. Consider if the user supplied a query struct that happened to contain json struct field tags, for example:

    type query struct {
        Me struct {
            Name string `json:"full_name"`
        }
    }

    (Suppose the user wants to serialize the response later, or uses some struct that happens to have json tags defined for other reasons.)

    The JSON-encoded response from GraphQL server will contain:

    {
        "me": {
            "name": "gopher"
        }
    }

    As a result, query.Me.Name will not be populated, since the Go struct has a JSON tag calling it “full_name”, but the field is “name” in the response, which doesn’t match.

    This happens because encoding/json unmarshaler is affected by json struct field tags.

This motivation led to the conclusion that, for GraphQL-specific needs, a custom JSON unmarshaler is unavoidable.

Implementing a Custom JSON Unmarshaler

Discarding a well written, thoroughly tested, battle proven JSON unmarshaler in the Go standard library and writing one from scratch is not a decision to be taken lightly. I spent considerable time looking at my options and comparing their trade-offs.

Writing it from scratch would’ve been the last option to consider. I could’ve made a fork of encoding/json and modified it. But that would mean having to maintain a fork of encoding/json and keep it up to date with any upstream changes.

The key insight was that the process of JSON unmarshaling consists of two independent parts: parsing JSON, and populating the target struct fields with the parsed values. The JSON that GraphQL servers respond with is completely standard, specification-compliant JSON. I didn’t need to make any changes there. It was only the behavior of populating target struct fields that I needed to customize.

In Go 1.5, the encoding/json package exposed a JSON tokenizer API to the outside world. A JSON tokenizer parses JSON and emits a sequence of JSON tokens, which are higher-level and easier to work with compared to the original byte stream. I could make use of this to avoid having to parse the JSON myself.

The encoding/json JSON tokenizer is available via the Token method of json.Decoder struct:

// Token returns the next JSON token in the input stream.
// At the end of the input stream, Token returns nil, io.EOF.
// ...
func (dec *Decoder) Token() (Token, error)

Calling Token repeatedly on an input like this:

{
	"Message": "Hello",
	"Array": [1, 2, 3],
	"Number": 1.234
}

Produces this sequence of JSON tokens, followed by io.EOF error:

json.Delim: {
string: "Message"
string: "Hello"
string: "Array"
json.Delim: [
float64: 1
float64: 2
float64: 3
json.Delim: ]
string: "Number"
float64: 1.234
json.Delim: }
io.EOF error

Great! We don’t have to deal with all the low-level nuances of parsing JSON strings, escaped characters, quotes, floating point numbers, and so on. We’ll be able to reuse the JSON tokenizer from encoding/json for all that. Now, we just need to build our unmarshaler on top of it.

Let’s start by defining and iterating on our decoder struct that contains the necessary state. We know we’re going to base it on a JSON tokenizer. To make it very clear we’re only ever using the Token method and nothing else from json.Decoder, we can make it a small interface. This is our starting point:

// decoder is a JSON decoder that performs custom unmarshaling
// behavior for GraphQL query data structures. It's implemented
// on top of a JSON tokenizer.
type decoder struct {
	tokenizer interface {
		Token() (json.Token, error)
	}
}

And the exported unmarshal function will look like this:

// UnmarshalGraphQL parses the JSON-encoded GraphQL response
// data and stores the result in the GraphQL query data
// structure pointed to by v.
// The implementation is created on top of the JSON tokenizer
// available in "encoding/json".Decoder.
func UnmarshalGraphQL(data []byte, v interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	err := (&decoder{tokenizer: dec}).Decode(v)
	return err
}

We create a new JSON decoder around data, which will act as our JSON tokenizer. Then we decode a single JSON value into v, and return error, if any.

Pop Quiz: Is there a difference in behavior between unmarshaling and decoding a single JSON value?

Here’s a pop quiz. Suppose you have some JSON data and you’re looking to unmarshal it into a Go variable. You could do one of two things:

err := json.Unmarshal(data, &v)
err := json.NewDecoder(r).Decode(&v)

They have slightly different signatures; json.Unmarshal takes a []byte while json.NewDecoder accepts an io.Reader. We know that the decoder is meant to be used on streams of JSON values from a reader, but if we only care about reading one JSON value, is there any difference in behavior between them?

In other words, is there an input for which the two would behave differently? If so, what would such an input be?

This was something I didn’t quite know the answer to before I set out on this journey. But now it’s very clear. Yes, the behavior indeed differs: it differs in how the two handle the remaining data after the first JSON value. Decode will read just enough to decode the JSON value and stop there. Unmarshal does the same, but it doesn’t stop there; it continues reading to check that there’s no extraneous data following the first JSON value (other than whitespace). If there are any additional JSON tokens, it returns an “invalid character after top-level value” error.

To stay true to unmarshaling behavior, we perform a check to ensure there are no additional JSON tokens following our top-level JSON value; if there are, that’s an error:

func UnmarshalGraphQL(data []byte, v interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	err := (&decoder{tokenizer: dec}).Decode(v)
	if err != nil {
		return err
	}
	tok, err := dec.Token()
	switch err {
	case io.EOF:
		// Expect to get io.EOF. There shouldn't be any more
		// tokens left after we've decoded v successfully.
		return nil
	case nil:
		return fmt.Errorf("invalid token '%v' after top-level value", tok)
	default:
		return err
	}
}

Ok, now let’s figure out all the remaining state we need to keep track of in the decoder struct. We will implement unmarshaling with an iterative algorithm rather than recursive, and keep all relevant state in decoder struct.

We know that the JSON tokenizer provides us with one token at a time. So, it’s up to us to track whether we’re in the middle of a JSON object or array. Imagine you get a string token. If the preceding token was [, then this string is an element of an array. But if the preceding token was {, then this string is the key of an object, and the following token will be its value. We’ll use parseState json.Delim to track that.

We’ll also keep a reference to the value where we want to unmarshal JSON into, say, a v reflect.Value field (short for “value”).

What we have so far is:

type decoder struct {
	tokenizer interface {
		Token() (json.Token, error)
	}

	// What part of input JSON we're in the middle of:
	// '{' is object, '[' is array. Zero value means neither.
	parseState json.Delim

	// Value where to unmarshal.
	v reflect.Value
}

That’s a good start, but what happens when we encounter a ] or } token? That means we leave the current array or object, and… end up in the parent, whatever that was, if any.

JSON values can be nested. Objects inside arrays inside other objects. We will change parseState to be a stack of states parseState []json.Delim. Whenever we get to the beginning of a JSON object or array, we push to the stack, and when we get to end, we pop off the stack. Top of the stack is always the current state.

We need to apply the same change to v, so we know where to unmarshal into after end of array or object. We’ll also make it a stack and rename to vs []reflect.Value (short for “values”).

Now we have something that should be capable of unmarshaling deeply nested JSON values:

type decoder struct {
	tokenizer interface {
		Token() (json.Token, error)
	}

	// Stack of what part of input JSON we're in the middle of:
	// '{' is object, '[' is array. Empty stack means neither.
	parseState []json.Delim

	// Stack of values where to unmarshal. The top of stack
	// is the reflect.Value where to unmarshal next JSON value.
	vs []reflect.Value
}

That should be enough for now. Let’s look at the code for unmarshaling next.

Remember that the UnmarshalGraphQL function calls decoder.Decode method. Decode will accept v, set up the decoder state, and call decode, where the actual decoding logic will take place.

// Decode decodes a single JSON value from d.tokenizer into v.
func (d *decoder) Decode(v interface{}) error {
	rv := reflect.ValueOf(v)
	if rv.Kind() != reflect.Ptr {
		return fmt.Errorf("cannot decode into non-pointer %T", v)
	}
	d.vs = []reflect.Value{rv.Elem()}
	return d.decode()
}

decode is implemented as an iterative algorithm that uses the state in decoder struct. This is the entire algorithm at a high level:

// decode decodes a single JSON value from d.tokenizer into d.vs.
func (d *decoder) decode() error {
	// The loop invariant is that the top of the d.vs stack
	// is where we try to unmarshal the next JSON value we see.
	for len(d.vs) > 0 {
		tok, err := d.tokenizer.Token()
		if err == io.EOF {
			return io.ErrUnexpectedEOF
		} else if err != nil {
			return err
		}

		// Process the token. Potentially decode a JSON value,
		// or handle one of {, }, [, ] tokens.
		switch tok := tok.(type) {
		// ... (token handling elided) ...
		}
	}
	return nil
}

There’s a big outer loop. At the top of the loop, we call d.tokenizer.Token to get the next JSON token. The loop invariant is that the top of the vs stack is where we unmarshal the next JSON value we get from Token. The loop condition is len(d.vs) > 0, meaning we have some value to unmarshal into. When the vs stack becomes empty, that means we’ve reached the end of the JSON value we’re decoding, so we break out and return nil error.

Each loop iteration makes a call to Token and processes the token:

  • If it’s a value, it’s unmarshaled into the value at the top of vs stack.
  • If it’s an opening of an array or object, then the parseState and vs stacks are pushed to.
  • If it’s the ending of an array or object, those stacks are popped.

That’s basically it. The rest of the code is details: managing the parseState and vs stacks, checking for graphql struct field tags, handling all the error conditions, and so on. But the algorithm is conceptually simple and easy to understand at this high level.

Except… We’re still missing one critical aspect of making it handle the GraphQL-specific needs that we set out to resolve originally.

Let’s recall the GraphQL unions example, where the JSON-encoded GraphQL server response contained:

{
	"mascot": {
		"name": "Gopher",
		"hasTail": true
	}
}

And we’re trying to unmarshal it into:

var query struct {
	Mascot struct {
		Human struct {
			Name   string
			Height float64
		} `graphql:"... on Human"`
		Animal struct {
			Name    string
			HasTail bool
		} `graphql:"... on Animal"`
	} `graphql:"mascot(language: \"Go\")"`
}

The behavior we want is to unmarshal “Gopher” string into all matching fields, which are these two:

  • query.Mascot.Human.Name
  • query.Mascot.Animal.Name

But the top of our vs stack only contains one value… What do we do?

We must go deeper. Cue the music from Inception, and get ready to replace vs []reflect.Value with vs [][]reflect.Value!

Multiple Stacks of Values

That’s right, to be able to deal with having potentially multiple places to unmarshal a single JSON value into, we have a slice of slices of reflect.Values. Essentially, we have multiple (1 or more) []reflect.Value stacks. decoder now looks like this:

type decoder struct {
	tokenizer interface {
		Token() (json.Token, error)
	}

	// Stack of what part of input JSON we're in the middle of:
	// '{' is object, '[' is array. Empty stack means neither.
	parseState []json.Delim

	// Stacks of values where to unmarshal. The top of each stack
	// is the reflect.Value where to unmarshal next JSON value.
	// The reason there's more than one stack is because we
	// might be unmarshaling a single JSON value into multiple
	// GraphQL fragments or embedded structs, so we keep track
	// of them all.
	vs [][]reflect.Value
}

We need to modify decode to create additional stacks whenever we encounter an embedded struct or a GraphQL fragment (a field with a graphql:"... on Type" tag), do some additional bookkeeping to manage multiple stacks of values, and check for additional error conditions if our stacks run empty. Aside from that, the same algorithm continues to work.

I think getting the data structure to contain just the right amount of information to resolve the task was the most challenging part of getting this to work. Once it’s there, the rest of the algorithm details fall into place.

If you’d like to learn even more of the low-level details of the implementation, I invite you to look at the source code of package github.com/shurcooL/graphql/internal/jsonutil. It should be easy to read now.


Let’s quickly revisit our original GraphQL unions example that wasn’t working with the standard encoding/json unmarshaler. When we replace json.Unmarshal with jsonutil.UnmarshalGraphQL, the Name fields get populated! That’s good news; it means we didn’t do all that work for nothing.

See proof on the playground.

jsonutil.UnmarshalGraphQL also takes graphql struct field tags into account when unmarshaling, and doesn’t get misled by json field tags. The best part is that we’re reusing the rigorous JSON tokenizer of encoding/json via its public API, so there’s no fork to maintain. If a need to apply further GraphQL-specific changes to unmarshaling behavior arises in the future, it will be easy to do so.


It has been a lot of fun implementing the GraphQL client library for Go and trying to make the best API design decisions. I enjoyed using the tools that Go gives me to tackle this task. Even after using Go for 4 years, it’s still the most fun programming language for me to use, and I feel the same joy I did back when I was just starting out!

I’m finding GraphQL to be a pretty neat new technology. Its strongly typed nature is a great fit for Go. APIs that are created with it can be a pleasure to use. Keep in mind that GraphQL shines most when you’re able to replace multiple REST API calls with a single carefully crafted GraphQL query. This requires high quality and completeness of the GraphQL schema, so not all GraphQL APIs are made equal.

Note that there are two GraphQL client packages to choose from:

  • github.com/shurcooL/graphql – a general-purpose GraphQL client.
  • github.com/shurcooL/githubql – a client library specifically for accessing the GitHub GraphQL API v4.

I’ve had a chance to actually use githubql for real tasks in some of my Go projects, and it was a pleasant experience. That said, GitHub’s GraphQL API v4 is still missing many things present in their REST API v3, so I couldn’t do as much with it as I would’ve liked. They’re working on expanding it, and it’ll be even better when fully complete.

If you want to play around with GraphQL or take a stab at creating your own API with it, you’ll need a GraphQL server library. I would suggest considering the github.com/neelance/graphql-go project as a starting point (if you want a complete list of options, see here). Then, you can use any GraphQL client to execute queries, including the graphql package from this post.

If you run into any issues, please report in the issue tracker of the corresponding repository. For anything else, I’m @shurcooL on Twitter.

Happy holidays, and enjoy using Go (and GraphQL) in the upcoming year!


One of the common criticisms of the GOPATH is that it is hard to create isolated development environments without resorting to hacks, bash scripts, multiple GOPATH settings, or other trickery. Generally, I don’t often have too many problems with GOPATH, but when I do they are frustrating and hard to figure out. An example:

Your dependency manager copies deps from your GOPATH into your project’s vendor folder. But the dependency it copies is your fork, not the upstream, so the vendored package isn’t what you expected.

There are many ways to solve this problem, and the solutions depend greatly on your personal preferences for editing files. I’m going to outline my solution, which is interesting technically, but may not suit your workflow. I don’t claim to have tried all possible solutions, either, but I will outline some of the more interesting ones that I have tried.

Repeatable Environments

I’ve experimented for several years to make a simple, remote-friendly, repeatable development environment. One solution that stuck for longer than others was my dockerdevenv. It uses several layers of Docker filesystems plus a local filesystem bind to present a container that is isolated at the GOPATH level, available via SSH or VNC. It was close, but VNC can be slow, and it still didn’t solve individual project isolation. I’ve also used Eclipse Che and Cloud9 – both hosted either locally or remotely – but was frustrated by the resource utilization of Che and Cloud9’s pricing model.

None of these solutions felt quick or natural, though. I eventually bought an Intel NUC, loaded it up with RAM and a fast SSD, and installed Ubuntu. It became my headless development system, accessed via SSH. This solution works fairly well, especially if you’re comfortable with command-line editors like vim or emacs. It’s less fun if you want to use any UI for file editing, and the remote IP address means that any services/servers you start need to be exposed to listen on all interfaces rather than just binding to localhost, which is frequently the default.


One day I was stumbling through the Internet and I was reminded of LXD, a project from Ubuntu that adds some nice features on top of LXC. After working through a few of the tutorials, I realized that LXD had many features that made it appealing for my usecase.

  • Like Docker, LXD/LXC use container technologies to isolate processes and network
  • LXD is fast: stopping, starting, and creating containers is nearly instant if the image is present locally
  • LXD allows you to create different profiles and apply them to containers at or after creation time
  • Profiles can determine network configuration, privileges and more

An idea occurred to me: if I created a base container with my development tools, dotfiles, and so on already set up, I could use LXD’s fast cloning to create a new container for every project, or for groups of projects! I experimented with this until I created a base container that was configured well for generic Go development. I named this container base.

bketelsen@arrakis:~$ lxc ls
+------+---------+------+------+------------+-----------+
| NAME | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| base | STOPPED |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+

My thinking was that when I wanted to work on a new project, I’d use the clone functionality to clone the base container, then start it and work in that container using lxc attach, which is similar to docker exec. This worked, but the default LXD setup creates a network bridge (just like Docker does) that assigns containers their own IP addresses, which aren’t routable outside the host computer. It also meant that any services I started in a container would have to be exposed using iptables hackery, which I detest (because I’m not very good at it).

I did some more research and discovered that LXD supports several different networking models for the containers on a host. One of them is macvlan which makes the host’s network adapter appear to be multiple network adapters, each with a different MAC address on the host’s LAN interface. Containers that are started in this setup are peers to the host from a networking perspective, which means they’ll get IP addresses on your LAN rather than an unrouteable subnet like the bridge was assigning.

That’s really awesome!

It took a lot of searching and tweaking to figure out how to configure things, but I ended up with an addition to my default lxc profile that looks like this:

bketelsen@arrakis:~$ lxc profile show default
<... snip ...>
    nictype: macvlan
    parent: eno1
    type: nic
<... snip ...>

This configuration change causes containers started with the default profile to share the eno1 network adapter using macvlan and get IP addresses assigned from my router. I didn’t have to use any fancy ip link or bridge-utils tools to make it work; I just changed the configuration and saved it. The next container I created got an IP address on my LAN! Because I had already enabled SSH in the base container, I could SSH into the newly created container, clone a project, and start working. Very cool.
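For reference, a macvlan NIC can also be added to a profile directly from the command line. This is a sketch, not a transcript from the post: the profile name (default), device name (eth0), and parent interface (eno1) are assumptions you’d substitute for your own setup.

```shell
# Attach a macvlan NIC device to the default profile, shared
# from the host interface eno1. "eth0" is the device name the
# container will see; both names here are illustrative.
lxc profile device add default eth0 nic nictype=macvlan parent=eno1

# Inspect the result; the device should appear under "devices".
lxc profile show default
```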

An added bonus of the macvlan approach is that my router registers DHCP leases in DNS. LXD uses the container name as the DHCP host name. Therefore container “gopheracademy” becomes “gopheracademy.local” on my network. From any computer in my LAN I can type ssh gopheracademy and it connects me to the container.

Remote Access

Since each container is a separate IP address on my LAN, I don’t have to do any crazy configuration on the host computer to proxy or forward traffic to the container. If I start a web server, I can access it inside the LAN at http://containername:port. This makes for really easy development. tmux means I can walk away and come back where I left off, too.

But if I’m not at home, it’s a little more complicated. I wanted to keep the same easy feeling for remote work, so I created high numbered port forwards for each of my most frequently used containers at my router. So if I’m away from home I can ssh to home.ip.address:52345 and get to the gopheracademy container via ssh. Install Fail2Ban and other security tools if you’re exposing anything on the interwebs, friends!

I also have a VPN connection I can use if I’m on a trusted computer which makes the experience the same as if I were developing locally. Since a VPN connection isn’t always possible, I like having the high-port SSH forwards as a backup.

GUI Access

Here’s where things get fun. Last week Amazon finally announced what they were doing with their Cloud9 purchase. They’ve integrated it into the AWS console and enabled nice web-based development environments with EC2 backends. Cloud development with an Amazon twist. And an expensive one, if I were to create an environment for all the different projects I wanted to isolate. But reading the fine print, the Cloud9 development environments are free, and you only pay for EC2 backend usage. BUT, you can skip the EC2 backend and connect your development environment to an SSH server.

I have some of those!

I tested it out by creating a cloud9 environment with SSH access through the high-port forwarding to one of my LXD containers. Not only did it work, but it was fast, comfortable, and pretty darn awesome.

An example workspace: AWS c9 Workspace

I set up workspaces and high-port forwards for all of my favorite projects, and now I can fall back to a browser-based editor any time SSH/vim isn’t practical. And it’s free.


I couldn’t call myself a proper developer if I didn’t automate things a little bit, could I?

I created a bash script that automates all of this (except the port-forwarding, I’m too chicken to automate port-forwarding).


#!/bin/bash
echo Copying Base Container to $1
lxc copy base $1
echo Starting New Container: $1
lxc start $1

I called it newdev and execute like this:

$ newdev projectname

It creates a clone of the base container named projectname and starts it. There is no Step 2.


I’m still pretty new to LXD, so there may be some administration headaches I’m not aware of. One of the biggest pain points is that the LXD host and the containers can’t communicate with each other over the network. This is a known property of macvlan: traffic between the host’s parent interface and its macvlan children is blocked by design. I tried to read and understand the details, but my eyes glazed over and I stopped caring after the articles used terms like 802.11qr. It’s an easy limitation to deal with, because that host server is headless anyway, so I connect directly to the containers rather than going through the host – which is what I wanted anyway. The limitation is most annoying when I want to move files from the host into a container. I know LXD allows directories to be shared/mounted into containers from the host, so I may explore that if it gets annoying enough.

Repeatable and Isolated

Each development environment is repeatable, cloned from a base image. They’re 100% isolated too. It’s like having a Virtual Machine for each development environment without the overhead in CPU and disk space for many virtual machines. LXD has almost no overhead, and the containers use an overlay filesystem so the majority of the diskspace is shared with the base image.

I have local SSH and remote web access to my development containers, and I couldn’t be happier. I hope something in this post inspires you to tweak your development workflow and try new things; you just might find something that makes your life easier.

Brian is a Cloud Developer Advocate at Microsoft. @bketelsen on twitter, https://brianketelsen.com on the web


A few months ago, we switched over to using dep to manage dependencies for Veneur.

We were excited to discover that dep provided a straightforward workflow for a longstanding problem in Go development: managing forks of dependencies and contributions to upstream projects.

This is a common problem with dependency management. Some tools attempt to provide workflows for managing it, and in my experience, the workflow dep has chosen is the cleanest I’ve seen.

I have often seen this problem manifest in one of two ways:

Scenario #1: Contributions to upstream projects

package b

import "github.com/c/d"

github.com/a/b imports github.com/c/d. In the process of developing a/b, we discover we need to extend c/d to support a new feature. We provide a pull request to the c/d project, but we don’t know how long it will take for c/d to merge our changes – it could be a few hours, or a couple of months. Because we want to begin using the new feature in a/b immediately, we have to fork the project to a repository we control and rewrite all of our imports from github.com/c/d to github.com/a/d. Because some of our dependencies also pull in c/d, we end up having to rewrite imports for projects inside our vendor directory too.

Scenario #2: Known bugs in upstream projects

github.com/a/b imports github.com/c/d. For several months, we’ve been building a/b against the c/d commit specified by the SHA prefix afadedface1. We decide to upgrade to a newer version, specified by the SHA prefix badcab2. After a few days, we notice some sporadic problems – under a rare set of conditions, a new bug introduced in badcab2 can cause a race condition, leading to silent failures. We report the bug upstream, but the fix is complicated. It will require several months for upstream maintainers to implement without causing problems for other users of c/d. We decide to revert back to afadedface1 and wait for the fix before upgrading again.

A few weeks later, another developer (“Alice”) decides to upgrade the c/d package to the latest commit - cafebead3 - in order to pull in an unrelated feature that she needs. The fix for the networking bug is still in progress, so cafebead3 still doesn’t contain the fix for the networking bug, and upgrading to cafebead3 will reintroduce the same bug that upgrading to badcab2 did. But Alice was out of the office during the week that badcab2 was tested, so she isn’t aware of the rare bug. She has no reason to believe that this change could be problematic. The file that specifies the version requirements for a/b doesn’t contain any comments or self-documentation indicating that c/d should not be upgraded. She follows the standard process for upgrading dependencies – updating the SHA in the requirements file – and in doing so, reintroduces the bug that had previously been discovered.

What went wrong?

The problem in the first scenario is that we want to create a temporary fork, and we’re using a workflow – rewriting imports – that is designed for permanent, hard forks. When we create the github.com/a/d repository, we don’t want to treat it as a different package from github.com/c/d. It’s only a separate repository because we don’t have write access to c/d, so we can’t create feature branches there directly. (If we had no intention of submitting our patch upstream, a hard fork with rewritten imports would be the right solution.)

The second scenario is more complicated. It’s a problem that arises from multiple sources: communication gaps, cross-organization development, and an inability to automatically surface information at the right times.

There’s a difference between pinning to a SHA because you know it works and pinning to a SHA because you know that other versions won’t work. Upgrading software always carries the necessary risk of unknown problems, but there’s no reason we should be exposed to the unnecessary risk of known problems. Alice was able to reintroduce the bug accidentally because the amount of friction associated with upgrading a dependency is identical whether or not known problems exist. We need a way to increase the friction of upgrading a dependency when newer versions are known to cause problems.

I’m illustrating these examples with git SHAs, but keep in mind: these same problems can happen even if a/b and b/c are using semantic version schemes. New features that are backwards-compatible can be introduced as minor or point releases, and it’s not unheard-of for minor or point releases to introduce inadvertent bugs, either. Any version management scheme that allows flexibility in versions, instead of explicitly pinning to a specific SHA, runs the risk of introducing bugs when upgrading versions. Any version management scheme that does pin to a specific SHA requires a way to document problems with versions in a seamless way.

Unfortunately, many tools do not provide one. Many vendoring tools use JSON for configuration, and the JSON spec doesn’t support comments. And even when a tool supports inline documentation of requirements with comments, that still doesn’t prevent the problem from happening if someone ignores or overlooks the comment. We need to ensure that this can be caught at build time, within our CI system.

But what about tests?

You might think, “The second scenario could have been prevented with regression testing. If they had added tests for the networking bug, Alice wouldn’t have been able to reintroduce the bug by accident”.

Adding tests is certainly a good idea, and it’s not mutually exclusive with the dep-based workflow approach. However, we can’t rely solely on tests to prevent these categories of problems. Some bugs manifest in ways that aren’t easily testable, because they require complex interactions of system state. Or, we may have identified that an upgraded dependency is causing problems before we understand the actual mechanism of the problem, so we don’t yet know which path needs to be tested.

But, most importantly, writing regression tests for the bug often requires whitebox testing, which can only be done by modifying the c/d package itself. At that point we find ourselves back in the first scenario: we’ve forked a package to add a feature (regression tests are a feature!) and we want it to be merged upstream, once the underlying bug is fixed. Even if you’re able to provide complete test coverage for your packages and all of their dependencies, you still need a dependency management workflow that facilitates this process.

Is this specific to Go?

Nope. This category of problems exists with software in general, and these are only two examples of how it can manifest. These particular examples are written to be familiar to Go programmers, but it’s not hard to construct similarly tough situations for Python, Java, Ruby, JavaScript, C/C++, and so on. (Furthermore, this same problem can manifest at the OS level as well – instead of having package a/b import c/d, we could have application b linking against libcd.)

So how does dep help us deal with these problems?

With dep, we can address the first scenario with a single line. In our Gopkg.toml, we can use a source entry to specify an alternate repository source for a given package. We can switch to our own fork of a project without having to rewrite any import paths.

Previously, we might have had:

[[constraint]]
  name = "github.com/c/d"
  version = "1.0.0"

And instead, we can change this to:

[[constraint]]
  name = "github.com/c/d"
  source = "https://github.com/myusername/d.git"
  revision = "afadedface1"

When our patch is merged upstream, we just revert this commit, and we automatically switch back to using the upstream repository. Depending on the version or SHA range we originally specified in our requirements, we may not even have to update the version range this time; dep can pull in the latest compatible version.

The second scenario – accidentally reintroducing a known bug through an upgrade – is prevented as well, using this same trick. One convenient feature about Go: the source field is almost always redundant for typical use, because Go package names (and import paths) already specify source URLs. The mere presence of a source directive itself serves as a signal that something is afoot – a “proceed with extra caution” warning sign that will hopefully make Alice think twice before updating the version requested for that package.

But, even if Alice ignores or overlooks the source directive, we have another failsafe! It’s still impossible for her to accidentally update c/d to cafebead3, because that commit only exists in the upstream repository, and we’ve specified our fork as the source repository.

We achieve a good balance here – it’s still possible to upgrade the dependency if needed (if the bug is fixed, or if it’s later deemed an acceptable risk). But there’s just enough friction in this workflow that Alice is unlikely to upgrade to a buggy version by accident.


I had always been a bit confused as to how the crypto/rand package and the math/rand package were related, or how they were expected to work (together). Is this something that everyone else already grokked, or is that just my impostor syndrome talking? Well, one day I decided to see if I could defeat my ignorance, and this blog post is the result of that investigation.

The math One

If you’ve ever poked around in the math/rand package, you might agree that it presents a fairly convenient API. My favorite example is func Intn(n int) int, a function that returns a uniform random number in the half-open range [0, n). SUPER USEFUL!

You may be asking about the difference between the top-level functions and the functions hung off an instance of the Rand type. If you look at the source code, you’ll see that the top-level functions are just convenience wrappers that refer to a globally instantiated package value called globalRand.

There are a few gotchas when using this package, though. The basic usage only provides pseudo-random numbers, as a function of the seed. This means that if you create two Rand instances using a functionally equivalent seed, equivalent calls (in order and function) to the two instances will produce parallel outputs. (I found this concept to be personally challenging to my understanding of “random”, because I wouldn’t expect to be able to anticipate a “random” result.) If the two Rand instances are seeded with different values, the parallel behavior will not be observed.

The crypto One

Now, let’s look at crypto/rand. Okay, it’s got a nice and concise API surface. My understanding is that it defers to the underlying platform’s random generator, which is intended to be non-deterministic. The only thing is: HOW THE HECK DO I USE THIS?!? I see that I can generally get byte slices of random 1’s and 0’s, but what do I do with those?!? That’s not nearly as useful as what math/rand provides, right?

Hrm. Is it possible to get the non-deterministic behavior of crypto/rand, but with the more approachable API of math/rand? Maybe the real question is: How can I combine these two wildly different packages?

Two Great Tastes That Taste Great Together

(NB: https://www.youtube.com/watch?v=DJLDF6qZUX0)

Let’s take a deeper look at the math/rand package. We instantiate a rand.Rand by providing a rand.Source. But a Source is, like almost all awesome things in Go, an interface! My spidey-sense is tingling, maybe there’s an opportunity here?

The main workhorse in rand.Source is the Int63() int64 function, which returns a non-negative int64 (i.e. the most significant bit is always a zero). The further refinement in rand.Source64 just returns a uint64 without any limitations on the most significant bit.

Whaddya say we try to create a rand.Source64, using our tools from crypto/rand? (You can follow along with this code on the Go Playground.)

First, let’s create a struct for our rand.Source64. (Also to note: since math/rand and crypto/rand would collide in usage, we’ll use mrand and crand, respectively, to distinguish between them in the following code.)

type mySrc struct{}

Let’s address the Seed(...) function from the interface. We do not need a seed for interacting with crypto/rand, so this is just a no-op.

func (s *mySrc) Seed(seed int64) { /*no-op*/ }

Since the Uint64() function returns the “widest” value, requiring 64 bits of randomness, we’ll implement that function first. We use the tools from encoding/binary to read 8 bytes off the io.Reader provided by crypto/rand, and turn that directly into a uint64.

func (s *mySrc) Uint64() (value uint64) {
	binary.Read(crand.Reader, binary.BigEndian, &value)
	return value
}

The Int63() function is similar to the Uint64() function, but we just need to make sure the most significant bit is always a 0. That’s pretty easy to do with a quick bitmask applied to a value produced by Uint64().

func (s *mySrc) Int63() int64 {
	return int64(s.Uint64() & ^uint64(1<<63))
}

Great! Now we have a fully operable rand.Source64. Let’s verify it does what we need by putting it through its paces.

var src mrand.Source64
src = &mySrc{}
r := mrand.New(src)
fmt.Printf("%d\n", r.Intn(23))


Cool, so with the above code, at about a dozen lines, we have an easy-peasy way of hooking up cryptographically secure random data generation to the nice and convenient API provided by the math/rand package. However, I’ve come to learn that nothing comes for free. What might we be giving up by using this? Let’s check what happens when we benchmark this code.

(NB: I like using prime numbers in my tests, so you’ll see lots of 7919, the 1000th prime, as a parameter.)

What kind of performance do we get out of the top-level functions in the math/rand package?

func BenchmarkGlobal(b *testing.B) {
	for n := 0; n < b.N; n++ {
		result = rand.Intn(7919)
	}
}

Not bad! About 38ns/op on my laptop.

BenchmarkGlobal-4         	50000000	        37.7 ns/op

What if we create a new instance of the rand.Rand type, seeded with the current time?

func BenchmarkNative(b *testing.B) {
	random := rand.New(rand.NewSource(time.Now().UnixNano()))
	for n := 0; n < b.N; n++ {
		result = random.Intn(7919)
	}
}

At ~23ns/op, that’s really good, too!

BenchmarkNative-4         	100000000	        22.7 ns/op

Now, let’s check the new Source that we wrote.

func BenchmarkCrypto(b *testing.B) {
	random := rand.New(&mySrc{})
	for n := 0; n < b.N; n++ {
		result = random.Intn(7919)
	}
}

Oof, at ~900ns/op this clocks in at least an order of magnitude more expensive. Is it something we did incorrectly in the code? Or is this maybe the “cost of doing business” with crypto/rand?

BenchmarkCrypto-4     	 2000000	       867 ns/op

Let’s build a test to see how long just reading from crypto/rand takes in isolation.

func BenchmarkCryptoRead(b *testing.B) {
	buffer := make([]byte, 8)
	for n := 0; n < b.N; n++ {
		result, _ = crand.Read(buffer)
	}
}

Okay, the results show that the vast majority of time spent in our new tool comes from the underlying cost of interacting with the crypto/rand package.

BenchmarkCryptoRead-4     	 2000000	       735 ns/op

I don’t know that there’s a lot we can do to mitigate this. Besides, maybe a routine that takes ~1 microsecond to get non-deterministic random numbers is not a problem for your use case. That’s something you’ll need to evaluate for yourself.

Another Tack?

One of the usages of randomization that I am most familiar with is in exponential backoff tools. The idea is to reduce the chances of accidental synchronization when reconnecting to a stressed server, because pulsed loads might be detrimental to that server’s recovery. “Deterministic random” behavior itself is not a problem in these scenarios, but using the same seed across a bunch of instances can be problematic.

And this is a problem when defaulting to the top-level math/rand functions (which are implicitly seeded with 1), or by using the frequently observed pattern of seeding with time.Now().UnixNano(). If your services happen to come up at the same time, you just might end up in accidental synchronization with respect to the deterministic random output.

How about we use our powers of crypto/rand at instantiation time to seed the math/rand tools, after which we can still enjoy the performance benefits of using the deterministic random tools?

func NewCryptoSeededSource() mrand.Source {
	var seed int64
	binary.Read(crand.Reader, binary.BigEndian, &seed)
	return mrand.NewSource(seed)
}

We can run benchmarks on this new code, but we already know we’ll just be falling back to deterministic random performance characteristics.

func BenchmarkSeed(b *testing.B) {
	random := mrand.New(NewCryptoSeededSource())
	for n := 0; n < b.N; n++ {
		result = random.Intn(7919)
	}
}

And now we’ve proven our assumptions were right.

BenchmarkSeed-4           	50000000	        23.9 ns/op

About the Author

Hi, I’m Nelz Carpentier. I’m a Senior Software Engineer at Orion Labs in San Francisco. I’ve been writing Go for about 3 years now, and as I became familiar with it, it quickly became one of my favorite languages.

Disclaimers: I am neither a security expert, nor an expert on crypto/rand implementations across platforms; you might want to consult with your local security expert if you use these tools in mission-critical security use cases.

You can find a distillation of these examples here. It has an Apache 2.0 License, so feel free to slice, dice, and/or borrow whatever you need from it!
