Jamie Gaskins

Ruby/Rails developer, coffee addict

Perpetuity PostgreSQL Adapter Coming Soon

Sep 18, 2013 @ 12:00am

For those unfamiliar, Perpetuity is an object-relational mapper that follows Martin Fowler's Data Mapper pattern. It is the first implementation of this pattern in Ruby.

Now that Perpetuity's API has been stabilized somewhat, I've been working on a PostgreSQL adapter. This has been the #1 feature request since I began working on it. I don't agree with the usual reasons behind this request, but I don't think it's an unreasonable one. You can't always control what DB you get to use and at least you'll be able to use Perpetuity if you can't use MongoDB in production.

My first thought was that there's no point keeping the dependency on the moped gem if you're not going to use it, but adding that dependency manually in your app's Gemfile seems unnecessary. Ideally, you shouldn't have to think about your dependencies' dependencies. Sometimes you do, of course, but I try to keep things as close to ideal as I reasonably can.

While discussing this with Kevin Sjöberg in a few of our pairing sessions (he's paired with me multiple times on Perpetuity and has given some very valuable feedback on a lot of it), we discussed separating the adapters into perpetuity-mongodb and perpetuity-postgresql gems. This seems like the best idea so far.

These gems will have perpetuity as a dependency so that you can simply put the adapter gem into your Gemfile and get both the adapter and the main gem, similar to how rspec-rails depends on rspec. This also allows for plugin-style database adapters, allowing Perpetuity to talk to other databases without itself knowing about every available one.

What Even Is a Fake Geek Girl?

Sep 02, 2013 @ 12:00am

I've never understood the whole "fake geek girl" thing. Every time a woman does something geeky, claims to be a geek, or otherwise displays some form of geekiness, there's at least one man around that cannot wrap his mind around the fact that you can be a geek without a Y chromosome. This man will generally quiz this woman on her knowledge of geek things such as stereotypically geeky books like Lord of the Rings, some comic book or even a TV show or movie like Star Trek or Star Wars. When she falters on one of these questions (which is extremely likely because the man is intentionally trying to trip her up by asking obscure things), he labels her as a "fake geek girl".

The fallacy here is that this hypothetical (and all too often very real) man believes that the pop quiz he's giving this hypothetical (and, again, often very real) woman is about topics and ideas common to all geeks. Let me be perfectly clear about this: there is nothing common to all of us other than our unwavering enthusiasm for something.

Here's a list of things that would get me labeled as a fake geek if my gender were different:

  • I've never read LOTR. I read hardly any fiction at all, actually.
  • I read a few X-Men comic books on the rare occasion I could get one but I didn't read any others. No Avengers, Incredible Hulk, Spider-Man, Batman, Superman, etc. I learned about most of these from the after-school cartoons instead.
  • I never got into D&D. I played a few games of it but I couldn't tell you what edition I like best because I don't know the difference.
  • I've never played Settlers of Catan or any game like it. Ever. I played Monopoly.
  • I didn't get into Star Wars until I was in my 20s. I even liked Episode II because I thought the idea of Yoda jumping around with a lightsaber was awesome because I never expected it and only recently learned that it pissed off a lot of people who are really into the lore.
  • I love Star Trek: The Next Generation, Voyager and Deep Space Nine but I never really got into The Original Series.

These would all cause a lot of geek heart attacks, but nobody calls me on them or any of a dozen other "geek" things they grill women on simply because I'm a guy? Honestly, I think most geek guys would fail quite a few of these and I think a lot of them wear the geek label because it's the cool thing these days.

The reason I was considered a geek growing up was because 20 years ago, to own and be able to operate a computer as a teenager was still considered super geeky. I was lucky enough to be introduced to computers at a very young age (we had one of the original 1984 Macs) and I fell in love with it. It did things when I told it to. If it screwed up, it was because I screwed up in telling it what to do. It was pure logic and it made so much sense to me.

When I was 12, I found out I could write my own software for it. I was no longer limited to software written by other people. And if knowing how to operate a computer was geeky, being able to write my own software was super geeky. But that's really the only thing that made me a geek. I would be labeled an outsider in a heartbeat if they quizzed me like they do to women.

Sure, 20 years ago "geek culture" was pretty much nothing but men. I don't mean to say that there weren't geek women, but gender roles were much more mainstream back then. I think that's where this whole stigma comes from.

Today, I ran across this image on Twitter, which I felt was entirely justified:

Who's the fake geek now?

There are plenty of "geeky things" enjoyed by a lot of women that most self-proclaimed geek guys will never know about, as evidenced in the image above. My girlfriend is probably a lot geekier than I am. She's read all the things, she plays all the latest video games, and she is more in touch with "geek culture" than I am. I just follow a bunch of programmers on Twitter and get my geek news from retweets. Hell, that's how I found out that the whole "fake geek girl" crap was even a thing. I had no idea that people were accusing women of this simply because they're women.

Geek culture is much cooler now than it was back then and this will obviously attract all kinds of people. This isn't a bad thing. Geek culture isn't an exclusive club. We were all labeled geeks growing up because we didn't fit into someone else's elitist clique. Let's not be those fuckers. So, you want to be a geek? By all means …

Geek out with your beak out!

Don't call yourself a geek because it's the cool thing, but do it because you love something nearly to the point of obsession. Do it because you're amazingly talented at something.

The reason sci-fi and fantasy worlds are considered geeky is because they were not generally accepted by the mainstream (whatever that means). By that definition, a lot of other quirks and communities are considered geeky, as well (this is a very incomplete list):

  • LGBTQ — the LGB part of it is becoming more accepted as time goes on, but there's still a long way to go there and transgender people still confuse the shit out of most of the population.
  • Cosplay — they love dressing up as their favorite characters and aren't afraid to show it to everyone.
  • BDSM — it's like sci-fi used to be, nearly everyone's interested in it in some way but few will admit it publicly.

Don't ever let anyone tell you you're not good enough to fit in here. Anyone who tells you that doesn't realize that a lot of us are geeks because it's the only place everyone fits in.

Why I Like Developing with MongoDB

Jan 28, 2013 @ 12:00am

MongoDB is a document-oriented database. When I say "document", I don't mean the Microsoft Office variety. Specifically, it stores BSON documents. BSON is a binary encoding of JSON: rather than JSON's text representation, it's stored in binary form for efficiency. For the purposes of this article, BSON and JSON can be used interchangeably; they're mostly equivalent, just stored in different forms.

Some people hate MongoDB

There is a lot of MongoDB hate. A lot. I'm not going to go into examples here, but a lot of people have historically lost data with MongoDB due to not really knowing how to configure it properly. This is probably also a fault of the authors for not making it painfully obvious how to configure the database server/cluster for their purposes.

The problem comes from the database defaults being tuned for performance. This gives it excellent benchmarks and in a single-server installation this is fine, but makes durability across a cluster an issue. However, a cluster can be tuned for durability. I won't go into that here, though, because this article isn't about configuring MongoDB.

The only reasonable complaint I've personally seen is from people losing data after upgrading their MongoDB installation. This is bad, but as with any upgrade, you should back up your data first. Importing afterward is pretty straightforward.

Why I love MongoDB

When it comes to programming, there are a lot of reasons to choose one language or framework over another. I choose Ruby because, even with all of its drawbacks, it still conforms to my tastes better than any other language I've used. One of the core philosophies of Rails (besides "do what DHH feels like") is that minor details get out of the way and let you focus on building web apps. This is why we don't have to think about things like CSRF protection and HTTP headers/requests/responses except in special cases. I love that I don't have to defend against CSRF in every POST request or even think about HTTP at all in the vast majority of my controllers.

MongoDB shows similar qualities to both Ruby and Rails and that's what I love about developing applications with it.

Flexibility of data types

For the most part, JSON values are straightforward. A value:

  • surrounded by quotes is a string
  • surrounded by square brackets is an array/list
    • Each value in the array can also be of any type
  • surrounded by curly braces is a JSON object/hash/map/dictionary with keys and values (Note: BSON has a couple restrictions on keys)
  • without any decoration is numeric (or a variable reference, if supported by whatever you're doing with JSON)
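
A quick illustration of those types in Ruby, using the standard library's JSON parser (the keys and values here are made up for the example):

```ruby
require 'json'

# One JSON/BSON document can mix all of these value types freely.
doc = JSON.parse('{"name": "Jamie", "tags": ["ruby", "mongodb"], "meta": {"posts": 42}, "score": 9.5}')

doc['name']   # a String
doc['tags']   # an Array (whose elements can themselves be any type)
doc['meta']   # a Hash with its own keys and values
doc['score']  # a numeric value, parsed as a Float
```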

There is no type declaration for your data. You don't tell the DB that all "email" attributes have to be a string. They don't even all have to be the same type. If you want your values to be numeric in some, strings in others, and objects in others, you can do that.

SQL databases, on the other hand, are pretty inflexible, which is annoying in development. Every field in the same column of every row has to have the same type — I realize that this is fine for most cases, but there are times when that's infeasible. One reason I develop in Ruby is so that I'm not constrained by types. Every object can be any type of object.

Databases are used primarily to store the state of an object, so if an object can hold different types of data in the same attribute, I should be able to store that as such in the database. I might have a legitimate reason to store strings and numeric values in the same field — and storing every single value as a string, then converting back to integers/floats may not be what I want to do.
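
As a sketch of what that looks like (plain Ruby hashes standing in for documents, with hypothetical data), the same field can hold a different type in every document:

```ruby
# Three documents in the same collection. "id" is a string in one,
# an integer in another and a nested document in the third, and a
# document database doesn't mind.
documents = [
  { 'name' => 'Foo', 'id' => 'abc123' },
  { 'name' => 'Bar', 'id' => 42 },
  { 'name' => 'Baz', 'id' => { 'vendor' => 'xyz', 'number' => 7 } }
]

documents.map { |doc| doc['id'].class } # => [String, Integer, Hash]
```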


The flexibility of data types can cause trouble with existing data if you decide to change things down the line. If, for example, one of your classes starts assuming that one of its attributes is stored as a string when every existing document has it as a number, you'll need to make that true yourself.

# Using Perpetuity
my_objects = Perpetuity[MyClass].all
my_objects.each do |object|
  object.my_attribute = object.my_attribute.to_s
  Perpetuity[MyClass].save object
end

The only way I could find to do this was to update each document individually. I was hoping I could pass a JS function to the update, which would let me run the update in a single query, but I couldn't figure out a way to do that. If anyone knows if this is possible, tweet at me.

Flexibility of structure

In a SQL DB, every time you add a new data attribute to an object that needs to be persisted, you need to add a column to the DB. In apps with large amounts of data, this can cause downtime, which can cost you money. In development, this stops the developer's momentum while she runs a migration. If that data changes for any reason, that's another migration.

The single best thing from a developer's point of view is that adding an attribute to a MongoDB collection is as simple as adding the key to the document. There is no ALTER TABLE. You just pretend it was there all along. You can treat documents without that key as having nil as that attribute's value (including in queries). This is the default state of any instance variable or hash lookup in Ruby anyway.
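
That mirrors plain Ruby exactly; a minimal sketch, with a made-up Article class:

```ruby
# A document persisted before the new attribute existed, and one after.
old_document = { 'title' => 'Hello' }
new_document = { 'title' => 'World', 'subtitle' => 'Again' }

old_document['subtitle'] # => nil, as if the key had been there all along
new_document['subtitle'] # => "Again"

# Same story for a never-assigned instance variable:
class Article
  attr_accessor :subtitle
end
Article.new.subtitle # => nil
```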

Some people claim this is actually a weakness, that it can hide bugs in your code, that a rigid structure will raise exceptions when you try to give it an invalid attribute. That last part is true, but I have my doubts about it hiding bugs in your code. I guess it depends on how these documents are generated. For example, in Perpetuity, all BSON documents are generated from object state. The only way you can put the wrong data into your database is if your objects are storing things in the wrong instance variables or your mappers are serializing the wrong attributes, which means your testing could use some improvement.

Some also claim that it's a weakness because it bloats your data — every document has to explicitly specify which attributes hold which values (whereas in a SQL database, this is determined by the value's position in the row). This is true, but that's the cost of flexibility. SQL databases aren't exempt from this type of overhead, though. Every NULL field in a SQL row carries extra cost, as well (though arguably not as much, depending on the column type), whereas document databases can simply leave that attribute out. It's definitely a trade-off, but I can't imagine it'd make or break most applications. If keys are a significant portion of your documents' size and data size is an issue in your application, maybe a document database isn't the best use case for you.

The last justification is completely outside the scope of this article because I'm aiming for a developer-happiness perspective and data size means sweet frak-all in that light, but I figure someone that reads this would probably mention it.

It plays along with whatever I do

When you start developing on an existing Ruby on Rails application backed by a SQL database, you have to:

  1. create the database
  2. ensure the DBMS you're using for development has the right user account on it (for example, "root" with no password in MySQL) and configure your app to use that
  3. load your schema
  4. check Twitter
  5. write code that talks to the database

When you start working with an existing app backed by MongoDB, you:

  1. write code that talks to the database
  2. there is no step 2

It creates the DB on the fly. It defaults to no authentication. If you write to a collection that doesn't exist, it creates that, too. You get to stop worrying about the details and focus on the stuff that matters.

If you're logged into a PostgreSQL server as a user that has permission to create databases and you try to access a database that doesn't exist, why is the response "it doesn't exist"? I can't imagine a situation where I'm trying to talk to a database that isn't there and an error is the best result (unless the DB can't be created). Why do I have to make my intent explicit? When I say "talk to this database", it pretty much implies that I want to talk to it unless there is no way you can possibly let me talk to it, such as a disk error, network error or insufficient permissions.

I'm not saying there aren't plenty of times you want things to be explicit in programming. There are a lot of cases where being explicit is superior. This is not one of those times.


Maybe MongoDB isn't right for your particular use case because your app has requirements that are more important than developer happiness. Maybe your ops person/team doesn't have enough experience with MongoDB to keep it in their toolbelt. Maybe you need a graph database or table joins or transactions. But for most apps, I use MongoDB because I find it more fun to work with; this keeps me motivated and helps me work quickly.

Get Rid of ‘new’ and ‘edit’

Jul 11, 2012 @ 12:00am

Rails, like several other "RESTful" web frameworks in various languages, provides 7 combinations of URLs and HTTP verbs for accessing resources inside the app. In Rails, they are:

  • GET resources
  • POST resources
  • GET resources/:id
  • PUT resources/:id or PATCH resources/:id
  • DELETE resources/:id
  • GET resources/new
  • GET resources/:id/edit

The first two deal with collections of resources. GET retrieves the collection and POST adds to it. The next three deal with retrieving, modifying and removing an individual resource. The last two serve HTML forms for the user to generate or modify a resource.

Let me repeat that: the last two serve HTML forms. They do not actually interact with the resource at all. We put an HTML concern at the HTTP level.

My problem isn't necessarily that this happens at all. Surely, there are some things that make sense to have a page dedicated to them. But why do we do this by default? The default for Rails is to generate the new and edit form pages. But they're not really interacting with the resource. They're not "views". They're pages.
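
For what it's worth, opting out of those two routes is already a one-liner in Rails routing (articles is just a hypothetical resource name here); it's the default that bothers me:

```ruby
# config/routes.rb
resources :articles, except: [:new, :edit]
```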

Let's take it out of the browser for a minute and into the realm of the native app. If you were going to create a new item for a list, even if this list were stored on remote hardware, would you ask that remote machine how to let the user enter the information for that? Hell no! You'd display a form you already had prepared. If you wanted to edit an item, would you ask the server for the item's details? Why would you do that? If you can see an item to tell the application to edit it, you already have its information. The only reason you'd open a connection to the server at all would be to ensure you had the most up-to-date version of that resource.

We can do the same thing in the browser. In the simplest case, we can provide the "new" HTML form inside the index view and the "edit" form inside the show view. With web clients getting thicker by the day like the dude from Super-Size Me, we don't even have to render separate "new" and "edit" forms. Render the same form for both, but pull the data from the show view into the form. Sure, Rails makes it so we don't even have to care about populating the form, but this is an example.
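
A minimal sketch of that shared-form idea, with a hypothetical helper rather than Rails' actual form builder: pick the HTTP method and URL from whether the record is already persisted, and render one form for both cases.

```ruby
# Hypothetical record whose id is nil until it's been saved.
Record = Struct.new(:id, :name) do
  def persisted?
    !id.nil?
  end
end

# One helper decides how the single shared form submits.
def form_target(record)
  if record.persisted?
    { method: 'PATCH', url: "/records/#{record.id}" } # edit an existing record
  else
    { method: 'POST', url: '/records' }               # create a new one
  end
end

form_target(Record.new(nil, 'draft')) # => {:method=>"POST", :url=>"/records"}
form_target(Record.new(7, 'saved'))   # => {:method=>"PATCH", :url=>"/records/7"}
```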

To be clear, I'm not ranting. The fact that we add two additional pages to each resource by default does bother me, but only about as much as, say, the effects of continental drift on the field of cartography. It's more the fact that we're leaning on the wrong thing for no other reason than "that's the way we've always done it" and I think we can come up with better ways to do it.

Ruby Warts

May 16, 2012 @ 12:00am

I love Ruby. I've been developing in it as a hobby for 8 years now and professionally for 3. It is still my favorite language to do absolutely anything. But even with that in mind, it has some things about it I'm not sure I like.

Ordered parameters

Note: I use the words "argument" and "parameter" interchangeably here.

This is an implementation detail that frustrates me to no end. Every time you add a new parameter to a method, you either have to add it onto the end or you break its interface. That's fine when you're first creating the method, but once it's being used in the wild, changing its interface will piss off a lot of people.

Additionally, default values for parameters have to come at the end. You can't have an optional param with a required one after it. IO#readlines is an example of how Ruby works around that: its first parameter can be either the separator or the limit, based entirely on its class. This is a horrible idea. There are very few good reasons to ask for an object's class in Ruby, and branching behavior on it isn't one of them.
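
A simplified sketch of that kind of class-based branching, modeled on IO#readlines (this is not MRI's actual implementation, just the shape of the workaround):

```ruby
# The first positional parameter means "separator" if it's a String
# and "limit" if it's an Integer -- behavior hinges on its class.
def readlines_like(sep_or_limit = "\n", limit = nil)
  if sep_or_limit.is_a?(Integer)
    limit = sep_or_limit
    sep = "\n"
  else
    sep = sep_or_limit
  end
  [sep, limit]
end

readlines_like("|") # => ["|", nil]  (treated as the separator)
readlines_like(80)  # => ["\n", 80]  (treated as the limit)
```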

Ordered parameters are a relic of systems programming languages like C, where arguments must be in a certain order because they are then pushed onto the stack for the function being called. We have no reason to keep ordered params other than "that's the way we've always done it".

Granted, for single-arg methods, it's very, very convenient not to have to write the parameter's name. For example, array.find(value) is awesome, and if there's no ambiguity, it's perfectly reasonable to go without naming.

Rails works around this by accepting a hash of options in most of its methods, and this has become a common Ruby idiom because of it. It's a bandaid, though, because we end up having to dig the values out of the hash manually. It's an improvement for methods that take many arguments, but it could be better still with support built directly into the language.
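
The idiom looks something like this (create_user is a hypothetical method), complete with the manual hash digging that makes it a bandaid:

```ruby
def create_user(name, options = {})
  admin = options.fetch(:admin, false) # default applied by hand
  email = options[:email]              # nil when omitted
  { name: name, admin: admin, email: email }
end

create_user('Jamie', admin: true)
# => {:name=>"Jamie", :admin=>true, :email=>nil}
```

(Keyword arguments, added to the language in Ruby 2.0, are exactly this kind of built-in support.)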

Cannot override shortcut operators

In Ruby, you can override damn near anything. Operators like && and ||, however, are stubborn. Their functionality is hard-coded. Implementing them as methods on the object would be trivial:

class BasicObject
  def &&(other)
    self ? other : self
  end

  def ||(other)
    self ? self : other
  end
end
That would mimic the behavior of those operators and allow for overriding (though a real method call can't truly short-circuit, since its argument is evaluated before the call is made). This isn't a major issue and there are very few reasons you'd ever want to override them. I only came across it while trying to create an Array-like interface for Perpetuity's mapper-query syntax (something like ArticleMapper.select { |article| article.published == true && article.author_name == 'Jamie' } to feel more like Ruby and less like ActiveRecord).

Methods with questionable return values

I mentioned the IO#readlines method above. Its return value also frustrates me: it's an array of lines, with each line still containing its separator. Surely there are times when you'd want to keep the separator, but I have yet to come across one in 8 years of Ruby (and even in Perl, gets calls were almost universally chomped). The general case is that you just want the line's content. An additional argument could keep the line separator for those who want it.

When I first saw the method, I figured that it would be shorthand for io.read.split(separator) (IO#read slurps in the entire file to a string), but in reality it's implemented as a loop of IO#gets which appends to an array. After benchmarking, I found that io.read.split(separator) was faster, but it's impossible to get it to return the current implementation's return value including the separators. String#split doesn't have functionality for including the separator. These methods should do the same thing, even if they do it on different types of objects. Least astonishment was violated here.
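
The difference in return values is easy to see with StringIO, which responds to the same read/gets interface as a file:

```ruby
require 'stringio'

io = StringIO.new("foo\nbar\nbaz\n")
io.readlines        # => ["foo\n", "bar\n", "baz\n"]  separators kept

io.rewind
io.read.split("\n") # => ["foo", "bar", "baz"]        separators dropped
```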

Syntax-related stuff

The syntax is mostly awesome, but there are a few things I think could be improved.


Consider the following:

names.split(", ")

Why do we surround params with parentheses? I know we can omit them the majority of the time Seattle-style, and this is my preference, but why do they even go there? Personally, in my Ruby code, the only reason I don't write (names.split ", ") is that it would raise the hackles of most Ruby devs. To me, that makes more sense. You're using names.split ", " as a single value, so why not enclose the whole thing if you need parens?

Dot as a message indicator

This actually doesn't bother me at all, but I wonder about other possibilities. When sending messages to Ruby objects (e.g. calling methods on them), we use the dot to indicate that the token following the object is the name of that message. Smalltalk, Objective-C and Fancy (the latter two being derivatives of the former) all use a space to denote a message.

This would make something like RSpec fun:

names = UserRepository all map &:name
names should include: 'Foo Bar'

I haven't really thought this part all the way through yet, but clearly we'd need something to signify that &:name and include: 'Foo Bar' aren't messages but parameters. Maybe the dot was chosen to make this easy. Matz did list Smalltalk as one of his inspirations for Ruby, so I've been curious about his choice of the dot as a message token.

What do I plan on doing about it?

I don't have all the answers and some changes I'd like to make to Ruby definitely require changes to other pieces of the language. I'm also not confident that my pet peeves or desired features of Ruby would happen any time soon even if I posted them to Ruby's issue tracker. Matz explained that he does not want to make any backwards-incompatible changes in Ruby 2.0. I don't personally agree with that, since top-level version changes are the perfect time to make revolutionary changes (not to mention, there were a few incompatible changes from 1.9.2 -> 1.9.3). I think Matz is great and I'm ridiculously thankful that he brought this awesome language to life, but I disagree with quite a few of his decisions about Ruby. And that's okay; not everyone has to agree on everything.

However, I've been toying with the idea of building my own language on the Rubinius VM, the way Christopher Bertels did with Fancy. I'm not sure I'm ready to do that yet, but I think it'd be a great academic exercise as well as a chance to scratch my own itches. One nice thing about using Rubinius's VM is that I'd be able to call out to Ruby code if I need to (similar to how you can call Ruby from Fancy or Java from JRuby), which would give me a push start on the implementation.

I still love Ruby

In spite of all its warts and identity crises, Ruby is still my favorite programming language of all time out of the 20ish that I've used. I've been using it since before I knew what Rails was (possibly before it existed) and I'm sure I'll be using it for a long time yet. Between the language, the culture and the community, I can't imagine anything replacing it in my heart or my work.