Continuing yesterday's work, today I added support for parsing Twitter URLs to XRay.
There were a couple of tricks to making this work. I wanted Tweets to always be expanded to include as much data as possible, and I also wanted to avoid making a bunch of HTTP requests. Scraping the twitter.com website wasn't an option, since some of the data isn't available there or would require additional HTTP calls to fetch. (For example, I would have had to fetch every t.co URL in order to expand it.) So I set to work using the Twitter API to fetch the tweets.
I didn't want to hit Twitter's rate limits by funneling all XRay traffic through a single account, and I also didn't want to add a database to XRay, so that it can stay stateless. That left only one option: the XRay client passes in its own Twitter credentials when fetching twitter.com URLs. This is an acceptable compromise for me, since it keeps XRay simple and avoids my needing to get a Twitter app officially approved. If you want to use this feature, you can go to dev.twitter.com and create an app and access tokens for your account right there, which doesn't even involve writing any code. I've updated the XRay readme with further instructions.
Now p3k will include my Twitter credentials when making a request to XRay for a twitter.com URL, and XRay uses my Twitter credentials to fetch the tweet from the API.
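As a rough sketch of what such a request might look like, here is a client posting a twitter.com URL along with its own credentials to a self-hosted XRay instance. The host name, tweet URL, and credential values are all placeholders, and the parameter names are assumptions drawn from the XRay readme, so verify them against the version you're running:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder credentials; real values come from an app you create
# on dev.twitter.com for your own account.
params = {
    "url": "https://twitter.com/example/status/123",
    # Parameter names assumed from the XRay readme; double-check
    # against the docs for your XRay version.
    "twitter_api_key": "xxxx",
    "twitter_api_secret": "xxxx",
    "twitter_access_token": "xxxx",
    "twitter_access_token_secret": "xxxx",
}

# POST the form-encoded parameters to the instance's parse endpoint
# (xray.example.com is a placeholder for wherever you host XRay).
req = Request(
    "https://xray.example.com/parse",
    data=urlencode(params).encode(),
    method="POST",
)
```

XRay then uses those credentials to call the Twitter API on the client's behalf, so each client consumes its own rate limit and the service itself stores nothing.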
So now, whenever I repost something on twitter.com, the contents are expanded and my website shows the full Tweet!