Jamie Gaskins

Ruby/Rails developer, coffee addict

Ask customers for information you actually want

Apr 19, 2014 @ 06:15pm

I came across this tweet from Kurtis Rainbolt Greene:

Tweet about names

If you've ever interacted with a form on any website, you've seen registration forms that ask for things like first and last name as well as your gender. The form asks for first and last name separately so the app can address you by your first name and gender so it can use the "correct" pronouns.

"Correct"

I put "correct" in quotation marks because as much as companies and programmers would like to believe we're doing the right thing by addressing people by their first names or using masculine pronouns just because someone said they were male, what we're actually doing is pigeonholing users into our narrow view of the world.

Hello, #{first_name}!

Patrick McKenzie wrote a blog post a while back entitled Falsehoods Programmers Believe About Names that I'm not going to attempt to duplicate here. It's pretty much the best article ever about how everything you think you know about names is wrong.

The TL;DR is that names in various parts of the world don't fit our Firstname Lastname convention. In fact, there may be more names in the world that don't fit our convention than do.

Additionally, in many cultures, a business addressing a customer by their first name is considered too informal and can insult them or at least make them uncomfortable. Either way, you've lost that person as a customer.

EDIT: Kurtis also recommended that I link to the W3C article titled Personal names around the world. It's a fantastic article that provides background for a lot of the falsehoods listed in Patrick McKenzie's article linked above. It's fairly long, but so worth it.

Download gender binaries or compile from source

When you use the words "male" and "female", you're not referring to gender at all. If you are, you're using the wrong words. Rather, male/female refers to a person at the biological level, which is a weird fucking thing to ask about in your registration form for your new social app. Katrina Owen and I discussed this in a Ruby Rogues Parley thread once and it's really stuck with me since then.

The gist of it is: We're not animals. We're civilized (more or less) people that have layers of abstractions on top of our biology. We're not "male" or "female", we're "men" and "women".

And this brings me to my next point: even if you went with man/woman rather than male/female, you're still not covering all the bases. There are a lot of people in this world who don't fit the man/woman archetypes (regardless of whether that matches their biology). My long-time friend and current roommate is one shining example of this. This friend (who will remain nameless until I get their permission to use their name) struggles pretty constantly with their gender identity. They are neither masculine nor feminine (until a cat walks in the room, then they become Agnes from Despicable Me), but have to pick one very often. And a lot of times, people in the same situation feel excluded because the world wasn't designed to work for them the way it has been for everyone else.

Basically, we're making people feel like shit just so we can figure out whether to use "him" or "her" when referring to them. That is literally the only reason. Even if you don't care how your customer feels, this hurts your bottom line by encouraging them to find a competitor that does cater to them.

Okay, so my shit's broke. Please to tell me how to fix?

We ask people for first and last name so we can infer how to address them. We ask for their gender so we can infer which pronouns to use for them.

Both of these are wrong. The thing they have in common (well, besides being wrong) is that both are used to infer something about them. Rather than trying to guess details based on metadata, ask for the information you want directly.

That is, don't chop up their name to guess at how to address them. Ask them how you should address them.

Don't use third-person pronouns based on a multiple-choice gender response. Ask them what pronouns to use when referring to them. And don't limit pronouns to him/her. Don't even limit it to him/her/them. If you want to be truly inclusive and respectful, get a freeform response for every single pronoun they want used:

  • Subject pronoun (he/she/they)
  • Object pronoun (him/her/them)
  • Possessive determiner (his/her/their)
  • Possessive pronoun (his/hers/theirs)

The examples in parentheses are just that: examples. Let your customer enter a freeform response in case the pronouns they want aren't there (trust me, cisgender people, there are pronouns transgender people use that you haven't heard of).

I've mentioned this to a few people before and almost universally, their knee-jerk reaction was "But using male/female is so much easier!" This is simply not true. It's slightly (seriously, only slightly) easier for building the form, sure, but then you have to use a function or helper all over your app to select the "correct" pronouns every time you need one.

#{user.name} updated #{possessive_pronoun_for(user)} profile.

But wait, here's the one with user-specified pronouns:

#{user.name} updated #{user.possessive_pronoun} profile.

Since user.possessive_pronoun is a simple reader method and doesn't check anything else, there's no logic that needs to be tested (you are doing TDD, right?). You're just getting data from the model.
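Stripped to its essentials, the pronoun-as-data approach looks something like this. The attribute names here are my own illustration, not a prescribed schema:

```ruby
# Pronouns are plain attributes on the user, so rendering needs no
# lookup logic at all -- just reader methods.
class User
  attr_reader :name, :subject_pronoun, :object_pronoun, :possessive_pronoun

  def initialize(name:, subject_pronoun:, object_pronoun:, possessive_pronoun:)
    @name = name
    @subject_pronoun = subject_pronoun
    @object_pronoun = object_pronoun
    @possessive_pronoun = possessive_pronoun
  end
end

user = User.new(name: 'Sam', subject_pronoun: 'they',
                object_pronoun: 'them', possessive_pronoun: 'their')
message = "#{user.name} updated #{user.possessive_pronoun} profile."
# => "Sam updated their profile."
```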

Besides, if you put in that little bit of extra effort to offer proper pronoun support, you're very likely to have a lot of very happy customers who will feel included rather than looking at your registration form and saying "Oh, another fucking 'male/female' select box." They'll tell their friends that your service is inclusive and it will forever be known as "the service that doesn't hate marginalized people".

No More Octopress

Feb 16, 2014 @ 04:01am

I've been using Octopress for a while to serve my blog and as much as I like the fact that it handles a lot of things for me, I didn't like publishing with it. Posting an article took longer than it needed to because I had to:

  • leave the browser
  • open a terminal
  • run a rake task (which means making sure I'm using the right Ruby version and gemset)
  • open up my editor
  • write my article
  • run rake preview and check it in the browser to make sure it doesn't look like crap
  • git add --all
  • git commit -m 'blah blah blah'
  • git push heroku master

It's a hassle, and if you want to edit an article later, you have to go through that entire process again. This is the reason I've mostly ignored the blog. I'll very likely be posting more often now that it can all be done from within the browser without all those context switches (editor, terminal, browser).

Powered by Perpetuity

The ORM used to manage all of my database queries is Perpetuity, the Ruby Data Mapper-pattern ORM that I wrote. This was the fun part, actually. I got to play around and tease out a few patterns that might be useful. I even used the Postgres adapter I've been working on.

Notice the id formats of the articles: for this particular article the id is no-more-octopress. This is the actual id of the article in the database and the article model itself doesn't even know about it. This was so simple to do with Perpetuity it made perfect sense to use here:

Perpetuity.generate_mapper_for Article do
  id(String) { title.downcase.gsub(/\W+/, '-') }
  attribute :title, type: String
  # ...
end

The id(String) { ... } DSL call lets the database know the generated value is a String and will be generated by executing the given block against the object when it is inserted.
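Run by hand against this article's title, the block's transformation produces exactly the id shown above:

```ruby
# The same transformation the mapper's id block performs, applied to
# this article's title.
title = 'No More Octopress'
article_id = title.downcase.gsub(/\W+/, '-')
# => "no-more-octopress"
```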

This way, I've got the SEO-friendly URLs without having to resort to putting slugs on the model. HTTP concerns shouldn't be on your model.

Markdown format

The format of the articles is Markdown, just like it was in Octopress, and they're converted to HTML on the fly. I also threw in a little caching so it's not running that conversion on every page load.
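The caching is nothing fancy; the idea is just memoizing the rendered HTML per article id. Here's a sketch, with a toy renderer standing in for whatever Markdown library is actually used:

```ruby
# Memoize rendered HTML per article id so the Markdown conversion runs
# once per article instead of once per page load. The renderer block is
# a stand-in, not a real Markdown implementation.
class HTMLCache
  def initialize(&renderer)
    @renderer = renderer
    @cache = {}
  end

  def html_for(id, markdown)
    @cache[id] ||= @renderer.call(markdown)
  end
end

cache = HTMLCache.new { |md| md.gsub(/^# (.+)$/, '<h1>\1</h1>') } # toy renderer
cache.html_for('no-more-octopress', '# Hello')  # renders and stores
cache.html_for('no-more-octopress', '# Hello')  # cache hit, no re-render
```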

RSS/Atom Feed

To be honest, I didn't even check to see if there were any decent Ruby RSS-feed generators, but I figured that most of them would be built to work with ActiveRecord and similar libraries so I have my doubts that they'd work with PORO articles anyway. So I built my own. It wasn't all that difficult, really. It's just an object that you pass the values to and its to_xml method spits out the XML.
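The shape of that object is roughly this. The field names and structure are my own sketch, not the code that actually generates this blog's feed:

```ruby
# A plain object that takes feed values and spits out XML from to_xml.
# No feed library involved -- just string building.
class AtomFeed
  def initialize(title:, entries:)
    @title = title
    @entries = entries # array of hashes with :title and :id
  end

  def to_xml
    items = @entries.map { |e|
      "  <entry><title>#{e[:title]}</title><id>#{e[:id]}</id></entry>"
    }.join("\n")
    <<~XML
      <?xml version="1.0" encoding="utf-8"?>
      <feed xmlns="http://www.w3.org/2005/Atom">
        <title>#{@title}</title>
      #{items}
      </feed>
    XML
  end
end
```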

I'll probably extract it at some point but I just wanted to get a feed working. My blog has a few RSS subscribers and I didn't want to break that for them, although the articles will probably appear to have been updated.

Maybe I'm wrong and existing RSS generators would work, but I'm not all that concerned with it. This was probably one of the most fun parts of building this blog engine (even though it was XML) because it's something I'd never done before.

Payload size

On Octopress, my blog index payload was about 350kB. Now, I'm at around 50-60kB, including assets. This is a HUGE win for viewing on mobile devices (though I don't have mobile-friendly styles yet; I'm working on that). I did this by:

  • Removing social buttons and replacing them with links
  • Writing only the CSS styles I needed on semantic markup
  • Using only jQuery, Rails UJS, and Highlight.js for syntax highlighting on code examples
  • Serving minified and gzipped assets
  • Running the HTML through Rack::Deflater to reduce the HTML payload

Moved off of Heroku

This doesn't have anything to do with abandoning Octopress, but I also moved off of Heroku. This decision was primarily because the startup time of a Rails app on a sleepy Heroku dyno would have a severe impact on SEO. Adding a second dyno would keep it from going to sleep, but I'm not going to pay $35/month to host a blog. It's entirely possible that RSS subscribers would hit it often enough to keep it from going to sleep at all, but I wanted to try setting up a VPS to handle this anyway. It's been a while since I've had to do any ops stuff.

This blog is now running on my $5/month DigitalOcean droplet. This was a droplet I already had running, so the monthly fee isn't additional. My benchmarks show that this droplet far outperforms a Heroku dyno, too, so hopefully this improves SEO a bit — especially considering the payload benefits I mentioned above.

Le Fin

All in all, this was fun. I'd built several toy blog apps before and even Perpetuity's spec suite uses an Article object in damn near every example (they have a diverse range of attribute types: String title/body, Fixnum views, Time timestamps, etc), but actually deploying an app that uses Perpetuity's Postgres adapter was awesome to see.

I've still got some styling work to do, too, so that'll be fun. I'm not a designer, so if someone wants to contribute a design (even just a concept graphic), I'd be forever grateful. :-)

Clarifications on Jason Swett's Perpetuity Article

Feb 16, 2014 @ 03:35am

Jason Swett published an article about starting out on a Rails application using the perpetuity gem, and I'd like to address a few points he brought up in that article. I've been meaning to write this since I read his article, but time seemed to slip away from me.

Jason contacted me back in November about wanting to write his article and wanted to make sure he got everything right. Between the fact that I was somewhat unavailable to answer questions (I was in rural Louisiana with horrifically slow internet, even on my phone) and his really short deadline (which came and went over 6 weeks before the article was finally published), there are a few things in the article that were either slightly inaccurate or outdated, and I wanted to correct them.

You won't be able to take this example to production since Perpetuity's PostgreSQL adapter doesn't yet fully support retrieval or updating

This is just a timing issue. When Jason contacted me, Perpetuity::Postgres did not support retrieval unless your attributes were all strings (this means his Article example actually would've worked; it was just strings) and did not support updating at all. However, this was something I discussed with Jason over a month before his article was released and the Postgres adapter did completely support both retrieval and updating when he published it.

First let's create the project, which I'm calling journal. (The -T is to skip Test::Unit.)

rails new journal -T -d postgresql

Rails apps using Perpetuity should also pass the -O flag to skip ActiveRecord (the O stands for ORM). The -d postgresql option is unnecessary since pg is already a dependency of perpetuity-postgres.

This does change a few things, though. For example, Jason relied on the rake db:create command provided by ActiveRecord. Just as Perpetuity::Postgres creates tables automatically (as he mentioned in the article), it will also create the database if it doesn't exist.

You also won't get the rails dbconsole command without a database.yml file, but I'll see about providing something for that, as well.

You'll only need to add two gems to the Gemfile: Perpetuity itself and the Perpetuity PostgreSQL adapter.

His Gemfile looked like this:

gem 'perpetuity',          git: 'git://github.com/jgaskins/perpetuity.git',          ref: '82cad54d7226ad17ce25d74c751faf8f2c2c4eb2'
gem 'perpetuity-postgres', git: 'git://github.com/jgaskins/perpetuity-postgres.git', ref: 'c167d338edc05da582ff3856e86f7fb7693df0bb'

You only need to declare perpetuity-postgres in the Gemfile. There are no :git/:github requirements since there have been a few versions released. The core perpetuity gem is a dependency of perpetuity-postgres, so it does not need to be added to the Gemfile (similarly to how rspec-rails brings in the rspec gem) unless you want to specify a specific version or git ref as he did here.

TL;DR: This is all you need to add to your Gemfile:

gem 'perpetuity-postgres'

Interestingly, it created a table. You might be thinking, "Oh, yeah...we never did any migrations." Apparently Perpetuity takes care of creating your tables for you based on the mappers you define. This feels a little weird to me.

I'm sure it does feel weird. After working with tools that force you to do this manually, this is very likely to cause a few people to do a double take. Tables are created automatically mostly because running an additional command to do that is unnecessary. It's also because damn near every time I have to add a model in an application that uses ActiveRecord, this is what happens:

  1. Run tests. uninitialized constant Foo
  2. rails g model Foo bar baz quux:integer
  3. Run tests. table "foos" does not exist
  4. Swear loudly
  5. rake db:migrate
  6. Run tests. table "foos" does not exist

At this point, I may or may not have realized that I forgot to add db:test:prepare after db:migrate. If not, that's another 10 minutes or so of lost productivity while I check the database and make sure I'm not insane.

Perpetuity::Postgres will also add columns automatically when you add attributes to the mapper. This allows your app to deploy without downtime if there are changes to be made to the DB schema. When you deploy an ActiveRecord app to Heroku, you have to run the migrations after the deploy because your app won't have the updated migrations until then. Depending on the size of your app, this could cause problems because heroku run rake db:migrate has to load your entire application on an underpowered EC2 instance. In large apps, this leads to 30 seconds or more (some apps take over a minute) with your app already running and that table or column being unavailable. If you're getting constant traffic, this is probably unacceptable.

It feels like a bit of magic, but all we're doing is rescuing an exception (raised by the Postgres driver), using information already provided in the mapper to generate a table and then retrying the code that raised the exception.
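Here's that pattern in skeleton form, with a fake connection standing in for the real Postgres driver. The class and method names are placeholders, not Perpetuity's API:

```ruby
# Raised by the fake driver when a table doesn't exist yet.
class TableMissing < StandardError; end

# Stand-in for a database connection that tracks which tables exist.
class FakeDB
  attr_reader :tables, :rows

  def initialize
    @tables = []
    @rows = []
  end

  def insert(table, row)
    raise TableMissing unless @tables.include?(table)
    @rows << row
  end

  def create_table(table)
    @tables << table
  end
end

# Attempt the insert; on a missing-table error, create the table and
# retry the insert. In the real adapter the schema comes from the
# mapper's attribute metadata.
def insert_with_auto_table(db, table, row)
  db.insert(table, row)
rescue TableMissing
  db.create_table(table)
  retry
end
```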

I prefer snake_case table names over CamelCase, but I won't look this gift horse in the mouth. I'm just glad it worked.

I used the unmodified class names as table names because the DB is a detail, so the format of the table name is inconsequential in the majority of cases. I'm sure this borders on violating Least Astonishment, but if you're looking for the articles table and you find Article, I'd say that's a pretty easy pill to swallow. Feel free to let me know of any reasons you might disagree.

That's not to say that there isn't a problem here, though. I haven't yet put in a way to customize the table name, so if you're dealing with legacy data, you'd have to adapt the table to Perpetuity, which goes completely against the purpose of a Data Mapper. Martin Fowler even mentions in PoEAA that the Data Mapper pattern is great for when your data and your domain don't necessarily match. Everything that implies will take a fair bit longer to implement, but I would like to be able to customize the table names sooner rather than later.

And it does have the right attributes...kind of. It seems like a Ruby String should map to character varying(255) the way it does in ActiveRecord, but again, whatever. I understand that Perpetuity's PostgreSQL adapter is a work in progress.

This actually isn't a work-in-progress issue. It was a deliberate choice. Strings map to the text type rather than varchar(n) because, in Postgres, there is no difference between the two. If you want to limit a string to 255 characters, this should be done in your domain model, at least until I get constraints set up in the Postgres adapter.

Thank you, Jason

All in all, Jason's article was a great initial introduction to using Perpetuity with Rails and, as far as I know, the only one in existence that I didn't write. I'm thrilled that he felt that something I created was worth writing about in such a visible medium and I really do appreciate his help in spreading the word about Perpetuity, Ruby Object Mapper, and the Data Mapper pattern to the Ruby community. It's about time Rubyists had choices in ORMs that weren't Active Record implementations.

I'll be working on some blog posts and screencasts soon that will dive a lot deeper into creating a Rails app with Perpetuity, including some nice idioms that I've been able to tease out of its usage. I'm also working on a documentation website that will have all of this information on it.

Perpetuity 1.0.0.beta Released

Dec 15, 2013 @ 12:00am

After what feels like way too long, I've finally released a 1.0 beta of Perpetuity. For those unfamiliar, Perpetuity is an implementation of the Data Mapper pattern in Ruby (and, from what I can tell, it was the first one in Ruby). If you're used to ActiveRecord, it may feel a little awkward at first because suddenly your objects stand on their own, but this actually gives you a significant amount of freedom in how you structure your objects.

What makes Perpetuity awesome?

Because I love lists…

  • Your objects are whatever you want rather than forced subclasses of a library base class.
  • The query syntax is very similar to Ruby's Enumerable module
  • Persisting entire object graphs is a one-liner for new objects (great for seed/test data)

Objects can be whatever you want

With most ORMs, your persisted objects are required to be subclasses of some library base class. Some ORMs do this the least evil way and let you include the persistence behavior as a mixin, but that's still imposing.

Perpetuity allows your objects to be POROs (plain-old Ruby objects) or you can use gems like Virtus to give them a bit of a friendlier feel. As long as they save state in instance variables, Perpetuity can stick them into your database in a queryable form.

Query syntax

I get tired of writing Rubified SQL. I like to think of database tables/collections as arrays on disk, and we query arrays in Ruby using the select method and passing a block:

array.select { |object| object.name == 'foo' }

With Perpetuity, we query a database with the exact same syntax.

Perpetuity[Foo].select { |foo| foo.name == 'foo' }

The database adapter transforms this into its own query format:

/* PostgreSQL */
SELECT * FROM "Foo" WHERE name = 'foo'
// MongoDB
db.Foo.find({"name":"foo"})
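One way such a DSL can work (this is a sketch of the general technique, not Perpetuity's actual implementation) is to yield a recorder object into the block that captures the attribute and comparison instead of evaluating them:

```ruby
# Records the attribute access and comparison made inside a query block
# rather than evaluating them against a real object.
class QueryRecorder
  attr_reader :attribute, :value

  def method_missing(name, *_args)
    @attribute = name
    self
  end

  def respond_to_missing?(*_args)
    true
  end

  def ==(other)
    @value = other
    self
  end
end

# Run the block against a recorder, then build SQL from what it captured.
def to_sql(table, &block)
  recording = block.call(QueryRecorder.new)
  %(SELECT * FROM "#{table}" WHERE #{recording.attribute} = '#{recording.value}')
end

to_sql('Foo') { |foo| foo.name == 'foo' }
# => SELECT * FROM "Foo" WHERE name = 'foo'
```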

You can find more information on queries in the project README.

Persisting entire object graphs

If you're creating a new set of objects, such as seed data, test data, or just a complex graph that gets created when a new user registers (we've all seen Rails apps with a dozen after_create hooks on the User model), you can persist them all by inserting the top-level parent object. It will automatically persist all of its attributes if necessary.
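In miniature, cascading persistence can look like this toy sketch (not Perpetuity's implementation), where inserting the parent walks its instance variables and persists nested objects first:

```ruby
class Comment
  attr_reader :body

  def initialize(body)
    @body = body
  end
end

class Article
  attr_reader :title, :comment

  def initialize(title, comment)
    @title = title
    @comment = comment
  end
end

# Persist any nested Comment attributes before the object itself.
# The is_a?(Comment) check is a toy stand-in for real "is this
# persistable?" logic.
def insert(store, object)
  object.instance_variables.each do |ivar|
    value = object.instance_variable_get(ivar)
    insert(store, value) if value.is_a?(Comment)
  end
  store << object
end

store = []
insert(store, Article.new('Hello', Comment.new('First!')))
store.map(&:class) # => [Comment, Article]
```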

Install Perpetuity

If you'd like to try out Perpetuity in an application, simply add one of the database adapters to your Gemfile:

gem 'perpetuity-postgres'
gem 'perpetuity-mongodb', '1.0.0.beta'

Configuration can also be a one-liner:

Perpetuity.data_source :postgres, 'my_pg_db'

For a more robust configuration:

Perpetuity.data_source :postgres, 'my_pg_db', host: 'localhost',
                                              port: 5432,
                                              username: 'spiderman',
                                              password: 'nobodyknowsimpeterparker',
                                              pool_size: 20

This would go in a Rails initializer or a file required by your application on startup.

As of this writing, the Postgres adapter, the one most people have been waiting for, does implement most of Perpetuity's CRUD features but is missing indexing and a few of the niceties. The MongoDB driver fully implements all of Perpetuity's current features, though. To configure it, put :mongodb in place of :postgres in the config line above.

You can find a lot more information on usage in the Perpetuity project readme. If you find any problems with Perpetuity or either of the database adapters, please let me know via the issue tracker or a tweet (preferably with a gist showing how to reproduce).

Perpetuity PostgreSQL Adapter Coming Soon

Sep 18, 2013 @ 12:00am

For those unfamiliar, Perpetuity is an object-relational mapper that follows Martin Fowler's Data Mapper pattern. It is the first implementation of this pattern in Ruby.

Now that Perpetuity's API has been stabilized somewhat, I've been working on a PostgreSQL adapter. This has been the #1 feature request since I began working on it. I don't agree with the usual reasons behind this request, but I don't think it's an unreasonable one. You can't always control what DB you get to use and at least you'll be able to use Perpetuity if you can't use MongoDB in production.

The first thing this made me think of was that there's absolutely no point in keeping the dependency on the moped gem if you're not going to use it, but adding that dependency manually in your app's Gemfile seems unnecessary. Ideally, you shouldn't have to think about your dependencies' dependencies. Obviously, sometimes you do have to think about them, but I try to keep things as close to ideal as I reasonably can.

While discussing this with Kevin Sjöberg in a few of our pairing sessions (he's paired with me multiple times on Perpetuity and has given some very valuable feedback on a lot of it), we discussed separating the adapters into perpetuity-mongodb and perpetuity-postgresql gems. This seems like the best idea so far.

These gems will have perpetuity as a dependency so that you can simply put the adapter gem into your Gemfile and get both the adapter and the main gem, similar to how rspec-rails depends on rspec. This also allows for plugin-style database adapters, allowing Perpetuity to talk to other databases without itself knowing about every available one.
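In gemspec terms, that arrangement is a single dependency line in the adapter's spec. The gem names below match the ones discussed above; the version and summary are illustrative:

```ruby
require 'rubygems'

# The adapter gem declares the core perpetuity gem as a runtime
# dependency, so installing the adapter pulls in both.
adapter_spec = Gem::Specification.new do |spec|
  spec.name    = 'perpetuity-postgres'
  spec.version = '0.0.1'
  spec.summary = 'PostgreSQL adapter for Perpetuity'
  spec.add_dependency 'perpetuity'
end
```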
