Thinking about Planets and Challenges

Earlier today, at the special Transatlantic Bonus Homebrew Website Club, we continued a discussion about trying a community content-creation challenge, similar to what micro.blog does with its photo challenges.

One of the stumbling blocks was discovery: since this would be distributed, how do you follow the people who are participating?

One proposal involved creating a site you log into using IndieAuth; logging in would be how you’d join.

I started contemplating simple webmentions. The same way you RSVP to an event…you should be able to create a page for a challenge and have it receive webmentions, which would generate the feed.

So, that is what I’ve been contemplating all afternoon. The page would work like an old-style planet: a site that aggregates feeds from a variety of sources around a particular theme or community.

Using webmentions as a publishing avenue is what Brid.gy does. So, there are a few ways I thought this could work.

  1. Like the way Brid.gy does it, the post would be marked up with a u-syndication property, which would trigger a webmention to the page; but instead of being treated as a comment, the post would be added as an h-entry in a feed people could follow. To prevent abuse, there could be the same types of vouches/moderation you’d otherwise use. If you wanted to ‘take down’ a post, you’d use the webmention delete method.
  2. This would be the same, except using the u-category property instead of u-syndication. So, why is this a thought? Because you are tagging it, but linking it to a tag on another site. The argument for this vs u-syndication is that the syndication in this case is entirely at the discretion of the receiver…also, the URL is scoped to the feed, not to the individual post.

In both of these, it seems relatively easy to have your webmention receiver interpret this markup and generate an h-feed, either of reposts of the posts or a simple feed with just the URLs of the individual posts.
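
To make this concrete, here is a minimal sketch, in Python, of what the receiving side might do. This is not how any existing receiver works: the challenge URL is a placeholder, the feed is just an in-memory list, and a real implementation would still need to verify the webmention, handle deletes, and apply whatever vouching or moderation you use.

    import requests
    from bs4 import BeautifulSoup

    # Placeholder URL for the challenge page that receives the webmentions.
    CHALLENGE_URL = "https://example.com/challenges/photo-challenge"

    feed = []  # entries for the challenge h-feed, newest first

    def handle_webmention(source, target):
        """Add the source post to the challenge feed if it links to the page
        with a u-syndication or u-category property."""
        if target != CHALLENGE_URL:
            return False

        response = requests.get(source, timeout=10)
        soup = BeautifulSoup(response.text, "html.parser")

        for class_name in ("u-syndication", "u-category"):
            for link in soup.find_all(class_=class_name, href=True):
                if link["href"] == CHALLENGE_URL:
                    name = soup.find(class_="p-name")
                    feed.insert(0, {
                        "url": source,
                        "name": name.get_text(strip=True) if name else source,
                    })
                    return True

        return False  # no qualifying markup; treat it as an ordinary mention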

This is something that could be easily built into any site that has webmention capabilities with a minimum of additional code.

So, have at it, what am I missing here?

IndieAuth Spec Updates 2022

Over the course of 2021, the IndieWeb community had several popup sessions to continue refining the spec. This culminated in the release of the latest iteration on February 22, 2022.

I really enjoyed Aaron Parecki’s post explaining the changes made during the 2020 round of updates, and thought I’d write my own this time using the same format. I’ve been heavily involved in this update, but Aaron is embedded in the OAuth world to a degree I’m not, and may have more insights; I hope he gets a chance to blog about them.

Many of the changes bring IndieAuth closer to OAuth 2.0, ensuring that an OAuth client could support IndieAuth with a minimum of changes.

Metadata Discovery

The first thing an IndieAuth client does is discover the user’s endpoints and redirect the user to their server to authorize the client.

Previously, the client would look for HTTP Link headers or HTML link elements pointing to the authorization and token endpoints. As we continue to expand into new use cases, we need a new way to provide information to clients.

The new metadata document that servers publish and clients retrieve not only identifies the location of the various endpoints (some of which are optional), but also describes the capabilities of those endpoints.

Changes for Clients: Clients must check for an HTTP Link header or an HTML link element with a rel value of indieauth-metadata. For the foreseeable future, clients should, for backward compatibility, still look for the authorization_endpoint and token_endpoint rel values.

Changes for Servers: The server has to publish the link values for the client to find, and at that URL return a JSON object with properties containing information about the various endpoints. You may wish to place it in the .well-known path for compatibility with other OAuth 2.0 implementations, but this is not a requirement.
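
As a minimal sketch of the client side (Python, using the requests and BeautifulSoup libraries; the profile URL is whatever the user typed in), discovery now looks roughly like this:

    import requests
    from bs4 import BeautifulSoup

    def discover_metadata(profile_url):
        """Find the indieauth-metadata URL for a profile and fetch the JSON document."""
        response = requests.get(profile_url, timeout=10)

        # Prefer an HTTP Link header with rel="indieauth-metadata"...
        metadata_url = response.links.get("indieauth-metadata", {}).get("url")

        # ...then fall back to an HTML <link rel="indieauth-metadata"> element.
        if not metadata_url:
            soup = BeautifulSoup(response.text, "html.parser")
            link = soup.find("link", rel="indieauth-metadata")
            if link:
                metadata_url = link.get("href")

        if not metadata_url:
            return None  # fall back to the legacy authorization_endpoint/token_endpoint rels

        metadata = requests.get(metadata_url, timeout=10).json()
        # metadata includes issuer, authorization_endpoint, token_endpoint,
        # and optional endpoints such as introspection, revocation, and userinfo.
        return metadata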

Issuer Identifier

In order to positively identify different IndieAuth servers, each one will now have an identifier, indicated by the issuer parameter. It is a prefix of the URL where the server metadata is found.

This can now be checked to protect against attacks, as IndieAuth clients interact with multiple servers.

Changes for Clients: Clients must now check that the issuer identifier returned from the authorization endpoint is valid and matches the one provided in the server metadata.

Changes for Servers: When the authorization endpoint builds the redirect back to the client, it will include the issuer identifier as the iss parameter. The issuer identifier will also be provided through the new metadata endpoint.
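
A minimal sketch of the client-side check, assuming the metadata document was fetched as above and the iss parameter arrives on the redirect:

    from urllib.parse import parse_qs, urlparse

    def verify_authorization_response(redirect_url, metadata):
        """Confirm the iss parameter in the authorization response matches the metadata."""
        params = parse_qs(urlparse(redirect_url).query)
        iss = params.get("iss", [None])[0]
        if iss != metadata.get("issuer"):
            raise ValueError("issuer mismatch; abort the flow")
        return params["code"][0]  # safe to continue redeeming the authorization code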

Refresh Tokens

Refresh tokens were always permitted in IndieAuth, but people didn’t know they were an option because they weren’t described in the spec.

Changes for Clients: Clients should note whether tokens have an expiry and be prepared to request new tokens using the refresh token process. The new metadata endpoint, if implemented, will advise whether a server supports the refresh token grant type. The only downside to not implementing support is that when the token expires, the user has to reauthenticate, which is a poor experience.

Changes for Servers: Servers are not required to implement short-lived tokens and refresh tokens. But if they choose to, they have to support the refresh_token grant type so that clients can get new tokens when one expires.
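
A minimal sketch of the refresh, from the client side (Python with requests; the token endpoint and client_id are placeholders discovered earlier):

    import requests

    def refresh_access_token(token_endpoint, refresh_token, client_id):
        """Exchange a refresh token for a new access token."""
        response = requests.post(token_endpoint, data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": client_id,
        }, headers={"Accept": "application/json"}, timeout=10)
        response.raise_for_status()
        token = response.json()
        # The response may include a new refresh_token and an expires_in value to track.
        return token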

Revocation Endpoint

The previous version of the spec overloaded the token endpoint to provide revocation with the action=revoke parameter.

Changes for Clients: Clients should support discovering the new endpoint through the server metadata endpoint and utilizing it.

Changes for Servers: Servers may wish to support the old revocation method for backward compatibility for the foreseeable future, but should implement the new endpoint.
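
Using the new endpoint is a single request; a sketch, with the endpoint URL taken from the server metadata:

    import requests

    def revoke_token(revocation_endpoint, token):
        """Ask the server to revoke a token."""
        # Previously this was action=revoke&token=... posted to the token endpoint.
        requests.post(revocation_endpoint, data={"token": token}, timeout=10)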

Token Introspection Endpoint

This new version introduces the token introspection endpoint, discoverable through the new metadata endpoint. It replaces the previous token verification process with one based on OAuth 2.0 Token Introspection, which also means a change to the response format.

The major difference between this method and the prior one is that the previous method was a GET request, while this one is a POST and requires some form of authentication.

Changes for Clients: None…token verification is meant to be done by resource servers, such as a Micropub endpoint that is not coupled with the IndieAuth endpoints. Some clients may have been using the verification process, and they must remove it.

Changes for Servers: The introspection endpoint is also optional. The old GET option may be retained for a time, but it is best to discontinue it as soon as possible, since the previous verification method was never meant to be used by clients.
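
A sketch of what a resource server’s check might look like; note that the spec leaves the authentication between the resource server and the introspection endpoint up to the implementation, so the bearer token used here is only an assumption:

    import requests

    def introspect(introspection_endpoint, token, resource_server_credential):
        """Ask the introspection endpoint whether a token is active and what it grants."""
        response = requests.post(
            introspection_endpoint,
            data={"token": token},
            headers={"Authorization": f"Bearer {resource_server_credential}"},
            timeout=10,
        )
        info = response.json()
        if not info.get("active"):
            return None
        # An active token includes properties such as me, client_id, scope, exp, and iat.
        return info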

New User Info Endpoint

A previous update to the spec added a profile scope and profile information in the authorization response. This update addresses the scenario where a client wishes to refresh that profile information, by allowing for an optional user information endpoint, discoverable via the metadata endpoint.

Changes for Clients: Clients supporting or using profile information may, if a user information endpoint is available, choose to query it periodically for updated information. This would allow avatars and display names to be refreshed automatically.

Changes for Servers: Implementing a userinfo endpoint is, of course, optional. In most cases, if you were already returning the profile information in the authorization response, it should be relatively easy to add the endpoint.
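
A minimal sketch of a client refreshing profile information; the endpoint comes from the server metadata, and the token must have been issued with the profile scope:

    import requests

    def fetch_profile(userinfo_endpoint, access_token):
        """Fetch the current profile information for the user the token was issued to."""
        response = requests.get(
            userinfo_endpoint,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        profile = response.json()
        # Typically name, url, and photo; email only if the email scope was granted.
        return profile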

Clarification of Profile and Its Scope

There were questions regarding the definition of the return values for the profile information, which were clarified in the update, and, more significantly, regarding the application of the profile scope…specifically, whether you could issue a token with only the profile scope and what that would mean.

The language of the previous update made some individuals believe that a token would not be issued if the request contained only the profile scope. This was clarified.

If you need a token, you redeem your authorization code at the token endpoint…which now allows you to have a token with just a profile scope…which could work well with the new userinfo endpoint. If you don’t need a token, just to know that the user logged in, you do the same redemption at the authorization endpoint.
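
Sketched out, the two paths differ only in which endpoint receives the authorization code (the parameter names follow the spec; the endpoint URLs come from the metadata discovered earlier):

    import requests

    def redeem_code(endpoint, code, client_id, redirect_uri, code_verifier):
        """Redeem an authorization code. Use the token endpoint if you need a token,
        or the authorization endpoint if you only need to know who logged in."""
        response = requests.post(endpoint, data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "code_verifier": code_verifier,
        }, headers={"Accept": "application/json"}, timeout=10)
        return response.json()

    # Token endpoint: returns access_token plus me (and profile, if that scope was granted).
    # Authorization endpoint: returns just me (and profile), with no token issued.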

Changes for Clients: This should be addressed per the use cases above, namely whether or not you need a token.

Changes for Servers: If you implemented this during the prior update and set it up so that you could not get a token with a profile-only scope, due to a misreading of the intent of the specification, you should change this. It shouldn’t affect any client.

 

Last week, I did a read-through of the Queens Bus Redesign Proposal. Only 515 pages. It looks like there are some improvements, but still some issues. Now I’m watching YouTube commentators complain about how bad it is. These people clearly didn’t read closely before they recorded their takes.
Tomorrow is World Backup Day, so I’m embarking on redoing my network-attached storage and associated backup process. The long part is waiting on and testing new hard drives. I have to make another backup of the data locally, remove all the drives from the NAS, install the new SSD OS drive and the bigger data drives, and do a complete self-test (which for a 10TB drive takes about 16 hours). Then restore from the backup…test…and if the restoration doesn’t work, remove the new drives, bring back the old 3TB drives, and start all over again…I’ll be done by…sometime in April.
Replied to https://twitter.com/hagengraf/status/1505554797885378569 by hagengraf (Twitter)

Now #UX
We want to change the existing settings page of #webmention #WordPress plugin (left).
Carolina and @JasonRouet are working on that (right)#CFHACK2022 #cfhack pic.twitter.com/VXkeCmkoFO

We’ve been working on the 5.X branch on GitHub on and off for a while, which redoes the underlying structure. Haven’t gotten to the UI yet.

Bookmark Links Plugin for WordPress Ready for Beta Use

My creatively named Bookmark Links plugin for WordPress is now available for beta use.

This is an enhancement of the Links feature in WordPress, which has been disabled by default for a decade now. The database table for this still remains, though I’ve extended it with a separate metadata table, which uses the WordPress metadata API.

The fun of this project was trying to add everything to Links that WordPress might have added if they’d continued the feature: all the enhancements made to comments, posts, etc. That means an improved interface (the admin list didn’t even have pagination), as well as a REST API endpoint, and more.

I’ve hooked it up to an Android app, which allows me to share URLs to it via the REST API and save them.

The original feature was designed as a blogroll…this is designed as a bookmark store. It has a built-in read later indicator, and various other pieces of metadata. I also added an import and export option just to cover myself.

For sharing with others, there is an option in the admin to publish a single bookmark as a post, or multiple bookmarks as a post in list form.

There is a lot more I would like to do with it, but I’d love to see people using it and suggesting improvements.

Meta Tags to Microformats

Earlier today, Jamie Tanna announced the opengraph-mf2 library and hosted project. It takes OpenGraph meta tags and converts them to microformats.

I do the same thing as one of the many pieces of my somewhat messy Parse This library. Parse This, which is designed to feed WordPress plugins, forms the basis of the reply contexts in the Post Kinds plugin, the parsing for the Yarns Microsub plugin, and my newly released bookmarks plugin. In all cases, it tries to extract as much data as possible about the URL sent to it and return it in microformats 2 JSON or the simplified jf2 format.

Jamie’s code is a simple 80 lines that takes a few tags and tries to convert them. I ran through every meta tag I could find by looking at dozens of different sites, so I was inspired to document the same.

First of all, if you look at MDN’s definition of the meta tag, it states that if the name attribute is set, the meta element applies to the entire page, but if the itemprop attribute is set, it is user-defined metadata. The content attribute contains the value for the name attribute. There is no mention of the property attribute in the HTML spec, but it is used by the OpenGraph protocol.

I take name, property, or itemprop and map it to the key in an associative array, with content as the value. For keys with CURIEs (a prefix separated by a colon), I use the prefix to create a nested array, which is what I use to map properties.
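
A simplified sketch of that extraction step, in Python with BeautifulSoup (Parse This itself is PHP, so this is only an illustration of the approach):

    from bs4 import BeautifulSoup

    def extract_meta(html):
        """Collect meta tags into a nested associative array keyed by name/property/itemprop."""
        soup = BeautifulSoup(html, "html.parser")
        meta = {}
        for tag in soup.find_all("meta"):
            key = tag.get("name") or tag.get("property") or tag.get("itemprop")
            content = tag.get("content")
            if not key or content is None:
                continue
            if ":" in key:
                # CURIE-style key such as og:title or article:published_time:
                # nest it under its prefix so each namespace stays grouped.
                prefix, _, rest = key.partition(":")
                meta.setdefault(prefix, {}).setdefault(rest, []).append(content)
            else:
                meta.setdefault(key, []).append(content)
        return meta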

There are common classic meta names that are longstanding and defined in the HTML specification, such as author, description, and keywords. If nothing else, this might generate some simple information.

Moving up a level to OpenGraph…there are several common metadata fields, namespaced with og.

  • og:title – this would map to p-name
  • Media – Some media properties have a :secure_url addition for the https version of the asset. This is still used, although its modern utility is sometimes questionable.
    • og:image – this would map to u-photo.
    • og:video – this would map to u-video
    • og:audio – this would map to u-audio
  • og:url – this would map to u-url
  • og:description – this would map to p-summary
  • og:latitude and og:longitude – these can map to the equivalent location properties
  • og:type – The type is a bit harder to map, but can be used as a hint. An article type would be considered an h-entry, a profile would be an h-card, and the music and video types would be h-cite.

Of the various types, music and video types are not really represented well in Microformats. So let’s focus on article first.

  • article:published_time – mapped to the dt-published property
  • article:modified_time – mapped to the dt-updated property
  • article:author – mapped to the author property

Many of the types have a tag property, which can appear one or more times…these get mapped to category.
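
Putting those mappings together, the conversion could look something like this sketch, building on the extract_meta() helper above (og:image:secure_url and the like are left out for brevity):

    def ogp_to_mf2(meta):
        """Map extracted og: and article: values to a microformats 2 item."""
        og = meta.get("og", {})
        article = meta.get("article", {})

        type_map = {"article": "h-entry", "profile": "h-card",
                    "music": "h-cite", "video": "h-cite"}
        og_type = (og.get("type") or ["article"])[0].split(".")[0]

        properties = {
            "name": og.get("title", []),
            "url": og.get("url", []),
            "summary": og.get("description", []),
            "photo": og.get("image", []),
            "video": og.get("video", []),
            "audio": og.get("audio", []),
            "published": article.get("published_time", []),
            "updated": article.get("modified_time", []),
            "author": article.get("author", []),
            "category": article.get("tag", []),
        }
        return {
            "type": [type_map.get(og_type, "h-entry")],
            "properties": {k: v for k, v in properties.items() if v},
        }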

Jamie opted to map the Twitter namespace properties as a secondary factor. I opted not to. The namespace is from their Cards specification, which is really just another OGP namespace. The problem is that they don’t provide an author name or website, only a Twitter handle. The majority of sites I viewed had both the og and twitter namespaces, and I never got anything from the twitter namespace that wasn’t in the og namespace except Twitter-specific details, which I wasn’t interested in. Facebook was responsible for OGP, and most people want to cover both sites, so they include both.

I did opt to look for the custom namespace for FourSquare venues, which is playfoursquare, for latitude and longitude. I also considered the presence of the namespace to indicate a FourSquare venue, and therefore an h-card.

  • playfoursquare:location:latitude – maps to p-latitude
  • playfoursquare:location:longitude – maps to p-longitude

After the OGP tags, I also looked for some other common meta tag names.

Some academic sources use Dublin Core properties in meta tags:

  • DC.Creator – p-author
  • DC.Title – p-name
  • DC.Date – dt-published
  • DC.Date.modified – dt-updated

Parse.ly, which is part of WordPress VIP, has its own markup.

  • parsely-title – p-name
  • parsely-link – u-url
  • parsely-image-url – u-photo
  • parsely-type – post is h-entry, index would be h-feed
  • parsely-pub-date – the publication date, mapped to dt-published
  • parsely-author – p-author
  • parsely-tags – p-category
  • They also offer the parsely-metadata property for other fields, which is JSON-encoded.
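
Since Dublin Core and Parse.ly cover much of the same ground as OGP, the remaining work is mostly choosing a priority order when a page carries more than one vocabulary. A small sketch of that fallback, again working from the extract_meta() output (the order shown is just one reasonable choice):

    def first_value(meta, *keys):
        """Return the first available value, checking nested keys in priority order."""
        for key in keys:
            node = meta
            for part in key:
                node = node.get(part, {}) if isinstance(node, dict) else {}
            if isinstance(node, list) and node:
                return node[0]
        return None

    def best_values(meta):
        """Prefer OGP, then Dublin Core, then Parse.ly, then the classic meta names."""
        return {
            "name": first_value(meta, ("og", "title"), ("DC.Title",), ("parsely-title",)),
            "author": first_value(meta, ("article", "author"), ("DC.Creator",), ("parsely-author",), ("author",)),
            "published": first_value(meta, ("article", "published_time"), ("DC.Date",), ("parsely-pub-date",)),
            "summary": first_value(meta, ("og", "description"), ("description",)),
        }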

I also convert JSON-LD to microformats, but that’s another story.

 

Chapel Trail Nature Preserve
A 450-acre passive park established in the 1990s. The wetlands have become home to 120 species of birds, as well as deer, marsh rabbits, alligators, snakes, turtles, largemouth bass, and insects. This nature preserve includes a 1,650-foot-long boardwalk, a pavilion for observation, and canoe rentals on Saturdays.