Since beginning the jf2 spec, I've continued developing XRay, and its format has diverged from the original jf2. Tonight I spent a while trying to reconcile the changes to submit a PR to the spec. I was unable to come up with a short PR, and instead got drawn into thinking about the motivations behind a simpler mf2 JSON format to begin with.
I use XRay in a number of projects for various purposes.
There are a number of things that XRay does when extracting the mf2 data.
- The name property is removed if it's a duplicate of the content.
- published is always a single string, and category is always an array.
- Nested Microformats objects are moved to a separate refs object, making it easier to consume.
- The author property is a simplified h-card containing only name/photo/url properties that are single values.
As you can see, a lot of what XRay is doing is cleaning up some of the "messy" parts of Microformats JSON. Not necessarily the specific JSON format, but more the overall structure, such as how an author of a post can be in many different places in a parsed Microformats JSON object. This is not to place blame on Microformats, since what it's doing is creating a JSON representation of the original HTML, and allowing authors flexibility in how they publish HTML rather than prescribing specific formats is a core principle.
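As an illustration, the kind of cleanup described above could be sketched roughly like this (the property lists and function here are my own invention for illustration, not XRay's actual code):

```python
# Hypothetical sketch of normalizing a canonical mf2 item into a
# simpler shape: some properties always single-valued, some always
# arrays, and a redundant name dropped.

SINGULAR = {"name", "published", "url", "content"}  # always one value
PLURAL = {"category", "photo", "syndication"}       # always a list

def normalize(mf2_item):
    """Flatten a canonical mf2 item into a simpler single/multi shape."""
    props = mf2_item.get("properties", {})
    out = {"type": mf2_item.get("type", ["h-entry"])[0].replace("h-", "")}
    for key, values in props.items():
        if key in SINGULAR:
            out[key] = values[0]      # collapse one-element arrays
        elif key in PLURAL:
            out[key] = list(values)   # guarantee an array
        # properties outside both sets are dropped in this sketch
    # drop a name that merely duplicates the content
    if out.get("name") and out.get("name") == out.get("content"):
        del out["name"]
    return out
```

The point is that consumers can then rely on the shape of each property without checking for both cases everywhere.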
What this means is XRay is actually acting more as an interpreter of the Microformats JSON, in order to deliver a cleaned-up version to consumers. Most of my projects that use XRay could actually be considered "clients", such as how I use XRay to parse posts for my reader, whether that's output to me in IRC or re-rendered as a post on IndieNews.
My primary need for an alternative Microformats JSON format is actually a client-to-server serialization, where the client is getting a cleaned up version of external posts, and can assume that the server it's talking to is responsible for taking the messy data and normalizing it to something it expects. In this sense, the use case of jf2 is a client-to-server serialization, whereas the Microformats JSON is a server-to-server serialization. This would then be a core building block for Microsub, a spec that provides a standardized way for clients to consume and interact with feeds collected by a server.
The main current challenge in defining a spec for this use case is how tied to specific vocabularies it should be. For example, Microformats JSON says that every value should always be an array. However, there are a few properties for which it never makes sense to have multiple values, such as location, and allowing arrays there creates additional complexity for consumers. It's easier to consume these properties when the values can be relied upon to always be a single value. Similarly, the author of an h-entry may be an object or a string, making it more complicated to consume when it can vary, so XRay's format always returns a consistent value. However, this is tied to the h-entry vocabulary, since other Microformats vocabularies don't have an author property. In general, the success I've had with XRay's format is due to the fact that it makes hard decisions about which properties it returns, and is consistent about whether those properties are single- or multi-valued, in order to provide a consistent API to consumers.
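A sketch of that kind of author normalization, assuming the value is either a bare string or a parsed h-card object (the function and field choices are mine, not XRay's actual implementation):

```python
def simplify_author(author):
    """Coerce an mf2 author value (string or h-card object) into a
    consistent {name, photo, url} dict with single string values."""
    card = {"name": None, "photo": None, "url": None}
    if isinstance(author, str):
        # a bare string is either a URL or a display name
        key = "url" if author.startswith("http") else "name"
        card[key] = author
        return card
    props = author.get("properties", {})
    for key in card:
        values = props.get(key, [])
        if values:
            card[key] = values[0]  # always a single value
    return card
```

Consumers then always see the same three keys, regardless of how the author was published.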
I am just not sure how to balance providing that simplicity for consuming clients with allowing flexibility in publishing, while also not hard-coding too much into a spec that might later be obsoleted.
I'm super excited to announce that DreamHost will be hosting our Homebrew Website Club PDX meetups for the rest of 2017! We've got all the dates planned out, so put them on your calendar!
The meetups are from 5:30-7:30pm at the DreamHost Portland office, at 621 Southwest Morrison St, 14th floor.
I wanted a way to quickly browse and share songs from my 100 Days of Music, so I thought I would make a page with links to each song. Clicking on any of the tracks opens up a video player with the description of the song.
I am curious to find out which songs people like the best, so to start with, I added some Google Analytics code to the page. I track events for each time a video is started, paused, when the video finishes playing completely, and if a video was interrupted by starting another. Hopefully this will provide some interesting data over time.
I want to add a more robust feedback mechanism, possibly even a simple "heart" button people can click, but I'm not sure how I want to handle that yet so that will have to wait until later.
I've made download links available for each track, in case you want to use these songs in your own projects! At the bottom of the page you'll see the copyright notice. I've decided to make these all available via a Creative Commons Attribution license, so feel free to use them for various projects! All I ask is that you let me know when you've used a song, preferably by writing a post about it and linking to the track on my website! That way it will show up as a comment on my post, like Marty's podcasts!
The URL of the website is:
and because emoji are fun, I made an emoji subdomain redirect to it as well, for no practical purpose:
The Micropub implementation report summaries had gotten kind of scattered around various URLs, so today I cleaned it up and consolidated everything. I also added the number of submitted reports to the home page, along with links, so that they are much easier to find.
I am making the report summaries all live under micropub.rocks, rather than be split between micropub.rocks and micropub.net. I updated the URL structure for the summaries on micropub.rocks to be more consistent:
The corresponding URLs on micropub.net now redirect to micropub.rocks. I also added a header bar on the spreadsheet views so that you can navigate back to the home page as well as to the other set of reports while viewing the spreadsheet.
Hopefully this makes things a little easier to find!
I normally don't like to launch a feature that's this rough around the edges, but I decided to anyway. I added a section to the OwnYourSwarm dashboard that will let you import a specific checkin by its Foursquare checkin ID.
When you click "Import", the processing flow for that checkin is started in the background. There is unfortunately no feedback in the UI on its progress yet, but in a couple of seconds you should see the checkin appear at your website. Shortly after, any coins, likes, and comments are sent via Webmention as well.
In the meantime, you can at least enter a checkin ID manually to trigger an import if any were missed because they were "offline" checkins.
I think this is the first time in the 100days project that I've worked on a project that is not my own! Today I added support for JSON requests to Known's Micropub endpoint. I also added support for JSON checkins that OwnYourSwarm sends.
I tried writing as little code as I could, and changing as little as possible about how it worked, so essentially I am just extracting the properties it knows about from the JSON request to the variables the plugin expects to find. This does mean that a few Micropub JSON features are still not supported, such as sending HTML content (Known seems to strip HTML tags from all content sent to it), and Known doesn't provide a mechanism for storing arbitrary nested JSON objects. However, I was able to get it to pass tests 200, 201 and 203 from the micropub.rocks test suite, which is enough for basic support.
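The property-extraction approach might look something like this sketch (the function name and details are hypothetical illustrations, and Known itself is written in PHP, so this is only a sketch of the idea):

```python
def micropub_json_to_vars(request):
    """Flatten a Micropub JSON create request into the flat key/value
    shape a form-encoded handler would have seen."""
    props = request.get("properties", {})
    flat = {"h": request.get("type", ["h-entry"])[0].replace("h-", "")}
    for key, values in props.items():
        if isinstance(values[0], dict):
            # nested objects (e.g. a checkin h-card) need special handling,
            # and Known has no mechanism for storing arbitrary nested JSON
            continue
        flat[key] = values[0] if len(values) == 1 else values
    return flat
```

This mirrors the "change as little as possible" approach: the existing form-encoded code path keeps working, and JSON requests are translated into the shape it already expects.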
It also is able to create checkins from the payload that OwnYourSwarm sends! I also made it download the photo that is attached to a checkin, rather than hotlink the Foursquare image URL.
Since I don't have commit access to the Known repo, I sent a pull request to Known with these changes. I tested everything with a local Known installation. Hopefully benwerd or mapkyca can merge the PR soon!
Hopefully this improves people's experience using tools like OwnYourSwarm and OwnYourGram with Known!
Today I wrote up documentation on OwnYourSwarm. It actually took quite a bit longer than I expected to write everything up. The documentation walks through each component:
Rather than repeat any of the information here, I will just send you off to read the docs! Please let me know if you have any questions! I hope to see some more implementations of people receiving checkins via Micropub soon!
I'm pretty excited to say that OwnYourSwarm is now backfeeding likes and comments from Swarm checkins!
Thankfully, the Foursquare API is well documented, and has quite reasonable rate limits. It also seems to have a well-documented change policy, so is unlikely to arbitrarily change out from under me. I'm hoping this backfeed feature will be relatively stable.
Like Bridgy, I implemented proxy pages for individual likes and comments on Swarm. The page is marked up with h-entry, and includes the author name, photo, URL, as well as the comment text. Swarm also has the ability to send "sticker comments", which I render as an <img> tag in the comment body.
Regular comments look like you'd expect.
Likes look similar, and have fallback text in the comment body.
I took advantage of specific behavior I've seen on my checkins in order to build a polling schedule that won't overload my server. For the most part, people only like and comment on recent checkins. After a couple days, a checkin is unlikely to get any new comments.
When a new checkin is posted, the user's polling interval is reset. OwnYourSwarm will check for new responses after 30 seconds. If none are found, it will wait 60 seconds, then 2 minutes, 5 minutes, 30 minutes, 1 hour, then finally a few more long-term tiers: 1 day, 2 days, 7 days, 14 days, 30 days. Of course as soon as you post a new checkin, your polling interval will be reset to 30 seconds and will start the cycle over. This hopefully will provide a good balance between quickly sending feedback for recent checkins, while also finding feedback on older checkins as well.
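The schedule above can be sketched as a simple tiered poller (the class and names are mine, not OwnYourSwarm's actual code):

```python
# The tiers from the schedule above, in seconds: 30s, 60s, 2m, 5m,
# 30m, 1h, then 1, 2, 7, 14, and 30 days.
TIERS = [30, 60, 120, 300, 1800, 3600,
         86400, 172800, 604800, 1209600, 2592000]

class ResponsePoller:
    def __init__(self):
        self.tier = 0

    def next_interval(self):
        """Seconds to wait before the next poll; advance one tier
        each time, capping at the longest interval."""
        interval = TIERS[min(self.tier, len(TIERS) - 1)]
        self.tier += 1
        return interval

    def reset(self):
        """A new checkin was posted: start over at 30 seconds."""
        self.tier = 0
```

The reset on each new checkin is what keeps feedback fast for active users without hammering the API for dormant ones.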
The nice thing about the Foursquare API is they provide an endpoint for retrieving the last N checkins for a user, and the data returned includes the number of likes and comments. This means I only need to hit one API endpoint to retrieve the last 100 checkins and can tell if there is new activity on any of them. I then make another API request to retrieve the checkin details only when there are new comments.
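The diffing step could look roughly like this, with the like/comment counts stored from the previous poll (field names follow the shape of the Foursquare checkin response, but this is an illustration, not OwnYourSwarm's actual code):

```python
def checkins_with_new_activity(recent, seen_counts):
    """Given the last N checkins from the API and the like/comment
    counts stored last time, return the IDs whose counts changed
    (only those need a detail fetch), updating seen_counts."""
    changed = []
    for checkin in recent:
        counts = (checkin["likes"]["count"], checkin["comments"]["count"])
        if seen_counts.get(checkin["id"]) != counts:
            changed.append(checkin["id"])
            seen_counts[checkin["id"]] = counts
    return changed
```

So each poll costs one list request, plus one detail request per checkin that actually has new activity.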
OwnYourSwarm will now send webmentions for all the coins that Swarm awards to your checkins!
Here's a checkin on Swarm:
Here's how it looks on my website:
OwnYourSwarm creates a web page for each coin award on your checkin, then sends a webmention to your post!
Here's what one of the comments above looks like on the OwnYourSwarm web page:
Of course it's marked up with the Microformats2 h-entry, so that my website can parse out the icon, text and number of coins!
To get my website to recognize the number of coins awarded, I used a vendor-specific Microformats2 property, "p-swarm-coins". Based on the recommendations for vendor-specific properties in Microformats2, I chose the prefix "swarm" and the property "coins".
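Since mf2 parsers strip the "p-" prefix, a consuming site would look for a "swarm-coins" property in the parsed h-entry. A minimal sketch (the function name is mine):

```python
def coins_from_entry(mf2_item):
    """Read the vendor-specific swarm-coins property from a parsed
    h-entry, defaulting to 0 if it's absent."""
    values = mf2_item.get("properties", {}).get("swarm-coins", [])
    return int(values[0]) if values else 0
```

Unknown vendor properties are simply ignored by parsers that don't care about them, which is what makes this kind of extension safe.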
Now I'm excited about getting points on Swarm again!
An interesting feature of the Swarm app is how it handles photos uploaded to checkins. If you check in and attach a photo, the checkin is actually created before the photo is uploaded. If you're on a spotty Internet connection, you'll see this because your checkin will exist and you'll get points for it, but there won't be a photo yet. The app will then continue to upload the photo separately, retrying if it fails. This is actually a really great app design on the part of Foursquare, but it does lead to some tricky interactions with the API.
Since OwnYourSwarm uses Foursquare's realtime API, it will receive a POST request almost immediately, often before the photo exists at the API. This means the initial Micropub request might be missing the photo.
Today I made OwnYourSwarm send a Micropub update request to update your post after the photo is uploaded. When you post a checkin, if there is no photo, OwnYourSwarm queues a background job on a 15-second delay. After 15 seconds it checks whether a photo exists, and if so, sends an update request with the photo URL. If no photo is found yet, it waits another 30 seconds and tries again, continuing on the following schedule: 15 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 30 minutes, 1 hour. We'll see if this is too much polling, but the rate limits on Foursquare are relatively high (500 requests per user per hour). It does mean that every checkin intentionally posted without a photo will be requested from Foursquare 8 times.
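A sketch of the retry schedule and the update payload (the "add" syntax follows the Micropub spec's update operation; the function names and URLs are mine, not OwnYourSwarm's actual code):

```python
# The eight retry tiers described above, in seconds.
PHOTO_RETRY_SCHEDULE = [15, 30, 60, 120, 300, 600, 1800, 3600]

def next_retry(attempt):
    """Delay before the next photo check, or None after 8 attempts."""
    if attempt >= len(PHOTO_RETRY_SCHEDULE):
        return None
    return PHOTO_RETRY_SCHEDULE[attempt]

def photo_update_request(post_url, photo_url):
    """Micropub update payload adding the photo once Foursquare has it."""
    return {
        "action": "update",
        "url": post_url,
        "add": {"photo": [photo_url]},
    }
```

After the last tier, the job simply gives up and assumes the checkin was posted without a photo.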
I had originally planned on using this same polling schedule to later pull back responses to your checkins (likes, and comments), but Ryan pointed out that I can probably use a simpler and more efficient polling schedule since the Foursquare API provides a method to return the last N checkins.