Creating a prototyping pipeline
This is a retrospective on work done earlier this year.
When designing a game, it is important to get from concept to playtesting as quickly as possible. This is advice given by Fullerton (Game Design Workshop), Schell (The Art of Game Design), and in a way, Steve Krug (Don’t Make Me Think, where he talks about usability testing with real people).
Krug says that you can uncover 80% of the usability problems with just 5 usability tests. I’ve not experimentally verified this, but anecdotally and at arm’s length it seems correct.
Fullerton talks about the importance of even low-fidelity prototypes for playtesting. The important thing is that, especially early on, you’re really vetting the interaction models. It doesn’t matter much what the medium is; you just want to know whether the sequence flow, flow control, state tracking, and so on actually work. And if they don’t (which they probably won’t), where and why, so you can iterate.
For me, personally, I need tooling where I can easily:
- scale variable content
- modify the content
- convert the content into physical media
- playtest with physical media
Since my day job is in tech, that is an asset I have available to me for the first two items.
Scaling & Modifying Variable Content
Structured data files
In my line of work, there is a type of file called “YAML” (“YAML Ain’t Markup Language”, though I like to think of it as “Yet Another Markup Language”). A YAML file looks like this:
cards:
  non_co_noobs:
    defaults:
      class: recruit
    base:
      rank: *contractor0
    veteran:
      rank: *contractor1
The indentation is important and defines a structured hierarchy (note that YAML wants spaces, not tabs). The “cards” line is a collection that contains a collection called “non_co_noobs”, which in turn contains a collection called “defaults”, which has a property called “class” that has the value “recruit”.
The two items beginning with * are special. The * operator is a YAML alias: it tells the parser to look for a collection I defined elsewhere and marked with a matching & anchor. In this case, the collection is “contractor0” – the parser then takes that collection and imports it right at that spot, as if I had typed it out there.
This is very important for the “scaling variability” requirement – I can define some reusable blobs of data that can be changed in one place, but fan those changes out to the multiple places where they are used. (In this case, “contractor0” had a set of properties that were the same every time.)
Most importantly, the YAML files are, with a little practice, pretty readable. The code editor that I use also lets me collapse different layers of the hierarchy, so I can visually focus on the parts that I am trying to edit, e.g.:
cards:
  non_co_noobs:
    defaults: …
  voidminers:
    - species:
        <<: *provincial_calayat
      base:
        com: 2
        rea: 1
        cha: 2
“defaults” still exists there, but all the data under it is collapsed.
Parsing the structured data files
I then wrote some scripts that would load this file and convert the content into little software objects that I can reference like this:
@cards[:non_co_noobs][:defaults][:class]
=> "recruit"
Importing into template service
After creating those structured data files, I wrote a script that takes those objects and turns them into a CSV file (comma-separated values, think “spreadsheet but very simplified”).
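As a rough sketch of that export step (the column names here are made up for illustration; the real script maps many more fields), the CSV writing in Ruby looks something like this:

require "csv"

CSV.open("cards.csv", "w") do |csv|
  # Header row: each column gets mapped to a field in the card template.
  csv << %w[deck card class rank]
  @cards.each do |deck, variants|
    variants.each do |card, attrs|
      csv << [deck, card, attrs[:class], attrs.dig(:rank, :title)]
    end
  end
end

# Produces something like:
#   deck,card,class,rank
#   non_co_noobs,defaults,recruit,
#   non_co_noobs,base,,Contractor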
CSV is a very portable file format that many services are capable of consuming. I have a subscription to the website dextrous.com, an online tool that allows you to rapidly produce printable prototypes using templates, and you can populate those templates with data from CSV files.
On the dextrous website, I created some templates for the different card types, mapped each column in the CSV file to different fields in the template, and then imported the CSV.
There was a lot of iteration at this step, refining the layout, renaming columns, realizing I needed a few additional columns, etc. But once I had it working, the flow became very smooth:
- Decide what value I wanted to change
- Update the data in the YAML file(s)
- Run the parser script to convert the YAML to CSV
- Upload the CSV into Dextrous.
✅ This process so far satisfies the “scaling and modifying variable content” requirements.
Convert the Content into Physical Media
One of the best, and probably the most crucial, features of Dextrous is that it will take a given card size and render a list of cards into a printable PDF sheet.
My process here is:
- Generate the PDF
- Print the PDF on my color inkjet printer
- Use a sliding paper cutter to cut out the cards individually
- Fill a stack of card sleeves, each with any blank or spare playing card plus one of these printouts
The paper cutter I use is a Fiskars Deluxe Paper Trimmer. I’m on my second one now; the first one lasted about a decade, and I replaced the blade twice before the wire rail finally shredded and I had to replace the whole unit.
The sleeves I use are just standard card-game sleeves, usually with opaque colored backs and transparent fronts. I like using colored sleeves because it makes it easier to differentiate cards when you have multiple card types / decks of cards for a game.
At this point it’s just a matter of sitting down to playtest.
Separate from the cards, I have a small kit (tacklebox-esque) with various generic components – pawns, counters, tokens, etc. These are versatile stand-ins for tracking game state.
✅ This process satisfies the “convert the content into, and playtest with, physical media” requirements.