During tonight’s online Indieweb NYC Meetup, I asked the question: if, in 100 years, all that historians had to learn about you from was what was on your website… would that be enough?

I just pushed the first set of improvements to Parse This to support JSON-LD. Parse This takes an incoming URL and converts it to mf2 or jf2; it is used by Post Kinds and by Yarns Microsub for exactly that purpose.

So, assuming the default arguments are set, the parser will, for a URL that is not a feed, look for microformats. If the microformats lack a summary, content, or references (for jf2) property, it will try to parse JSON-LD. If this produces no results, it will try OGP and meta tags.

Even though I’m not fond of Schema.org or JSON-LD, more sites have rich data in this format than in Microformats, and I want to be able to find author names and other properties to display proper attribution and rich link previews for my replies, bookmarks, etc.
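For illustration, pulling JSON-LD out of a page can be done with nothing but PHP’s DOM extension. This is a minimal sketch of the idea, not the actual Parse This implementation, and the function name is invented:

// Hypothetical helper: collect every JSON-LD block found in an HTML document.
function sketch_extract_jsonld( $html ) {
	$doc = new DOMDocument();
	// Suppress warnings from real-world, imperfect HTML.
	libxml_use_internal_errors( true );
	$doc->loadHTML( $html );
	libxml_clear_errors();

	$results = array();
	$xpath   = new DOMXPath( $doc );
	// Each ld+json script block is a JSON document describing the page.
	foreach ( $xpath->query( '//script[@type="application/ld+json"]' ) as $node ) {
		$data = json_decode( $node->textContent, true );
		if ( is_array( $data ) ) {
			$results[] = $data;
		}
	}
	return $results;
}

Properties such as author and headline from those blocks can then be mapped onto the equivalent jf2 fields.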

This will be available from the GitHub repo for Parse This immediately, and will likely be included in the next refreshed release of any plugin using the library. JSON-LD will not be of much use to Yarns, as it only parses feeds, not single pages, at this time.

The Truth about the Post Kinds Plugin

WordPress has the concept of post types, which are content types. The default type in WordPress is the post, which is literally a post of type post. Post types are also used to store attachments, menus, revisions, etc… all sorts of things that aren’t traditional posts.

Post Kinds as implemented in the Post Kinds Plugin are not a post type. They are actually a taxonomy…or a tag. This tag tells the system to use a particular template to render the post, and allows for auto-generation of things like feeds and archives. This is based on the Post Formats system built into WordPress.
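Under the hood, that amounts to an ordinary taxonomy registration. The sketch below is illustrative only; the labels and arguments are not necessarily the exact ones the plugin uses.

// Register a non-hierarchical "kind" taxonomy on posts, much like a tag.
add_action( 'init', function () {
	register_taxonomy(
		'kind',
		'post',
		array(
			'label'             => 'Kinds',
			'public'            => true,
			'hierarchical'      => false,
			'show_admin_column' => true,
			'show_in_rest'      => true,
			'rewrite'           => array( 'slug' => 'kind' ),
		)
	);
} );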

This means that if you turn off the plugin, all the custom rendering is gone and you get a standard post with all the extra information missing.

However, in the Gutenberg age, attributes are stored in the content field and parsed out into blocks. In theory, a microformats parser could be inserted to parse the microformats into blocks as well, instead of or in addition to Gutenberg’s own HTML-comment-based markup.

But, this means that some of the data structures of Post Kinds are being rethought.

I mentioned previously that I am adding a citation post type… as in, a WordPress custom post type. This means there will be a tab in the WordPress admin sidebar for creating just these links, whether you want to share them in list form, post them as a reply, etc.

If the post post type reflects an h-entry, I am simply storing the nested type, h-cite (citation), separately. From the user’s perspective, this should allow for improvements in the classic UI, a new implementation in the block editor, or a completely new UI that just handles some of the Indieweb types.
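To make the nesting concrete, here is roughly what a reply looks like in microformats terms, written out as a PHP array (the values are made up for illustration): the outer h-entry is the post itself, and the inner h-cite is the citation that would now be stored separately.

$entry = array(
	'type'       => array( 'h-entry' ),
	'properties' => array(
		'content'     => array( 'My reply text.' ),
		// The nested h-cite describes the thing being replied to.
		'in-reply-to' => array(
			array(
				'type'       => array( 'h-cite' ),
				'properties' => array(
					'url'    => array( 'https://example.com/original-post' ),
					'name'   => array( 'Original Post' ),
					'author' => array( 'Jane Doe' ),
				),
			),
		),
	),
);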

So, create a citation, tag it with the Indieweb post kind, and it will render appropriately… similar to how it is done now.

Replied to Identifying Post Kinds in WordPress RSS Feeds (danq.me)

I use the Post Kinds plugin to streamline the management of the different types of posts I make on my blog, based on the IndieWeb post types list: articles, like this one, are “conventional” blog posts, but I also publish notes (which are analogous to “tweets”), reposts (“shares” of thin…

I like the idea of adding the kind of post to the RSS feed to identify it, although not everyone will. I’ve opened an issue to remind myself to explore a version of it in future. Working on a major change to the plugin now.

Planning out the Next Generation of Post Kinds

I’ve been working on the Post Kinds plugin for several years now. It allows the enhancement of WordPress posts into the Indieweb types of posts.

But in the current environment, the question I keep getting asked is: When will it support Gutenberg, the WordPress block editor?

This is something of a problem for me as I do not know how this would work. I really need to talk it out with someone more embedded in the Gutenberg world and see what ideas come.

So, to prepare for that, I need to make a major shift in the way I have Post Kinds set up, to further disconnect what happens on the frontend from what happens on the backend. And since Gutenberg uses the REST API to get data, I need to add an endpoint to work with this.
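As a rough sketch of what that endpoint might look like (the namespace, route, and parameters here are invented for illustration, not the plugin’s actual API):

add_action( 'rest_api_init', function () {
	register_rest_route(
		'post-kinds/v1',
		'/parse',
		array(
			'methods'             => 'GET',
			// Only allow users who can edit posts to request parsed data.
			'permission_callback' => function () {
				return current_user_can( 'edit_posts' );
			},
			'args'                => array(
				'url' => array(
					'required'          => true,
					'validate_callback' => function ( $value ) {
						return (bool) wp_http_validate_url( $value );
					},
				),
			),
			// Placeholder callback: return whatever parsed data the editor needs.
			'callback'            => function ( $request ) {
				return array( 'url' => $request['url'] );
			},
		)
	);
} );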

There are a few experiments I am working on in this regard. In a previous version of Post Kinds, I switched from storing metadata about a photo in the photo post to storing it in the attachment. WordPress uses a built-in post type for attachments to store information about video, audio, images, etc.

This means you can edit the attachment, and separately attach it to the post. So, one of the first things I want to do is enhance that, and add the ‘citation’ post type I’ve been contemplating.

Loosely inspired by the old WordPress links manager, this would store the metadata for individual URLs. It would be necessary as I want to be able to turn my website into a Pinboard-esque bookmark archive. Not every bookmark is to be shared in my feed, or even public.

Bookmarks that are shared would be attached to a post, similar to the way attachments are, but otherwise they could be addressed simply as bookmarks.
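A registration for that kind of post type might look roughly like the following; the arguments are illustrative, not final, and in particular the visibility settings would depend on how private bookmarks end up being handled.

// A mostly non-public "citation" post type with its own admin screen.
add_action( 'init', function () {
	register_post_type(
		'citation',
		array(
			'label'        => 'Citations',
			'public'       => false, // not every bookmark is shared or public
			'show_ui'      => true,  // but it still gets its own tab in the admin sidebar
			'show_in_rest' => true,  // so the block editor and REST API can reach it
			'supports'     => array( 'title', 'editor', 'custom-fields' ),
		)
	);
} );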

Over the last week, I experimented heavily with a simple fields API to generate the form fields by configuring an array. The idea here is to define the fields I need for any given post kind and generate them automatically, rather than writing templates.
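Purely as an illustration of the idea, a kind’s fields could be described as data and rendered generically; the keys and the rendering loop below are hypothetical, not the actual Fields API.

// Describe the fields a bookmark kind needs as plain data.
$bookmark_fields = array(
	'url'     => array(
		'type'  => 'url',
		'label' => 'Bookmark URL',
	),
	'name'    => array(
		'type'  => 'text',
		'label' => 'Title',
	),
	'summary' => array(
		'type'  => 'text',
		'label' => 'Summary',
	),
);

// A generic renderer walks the configuration and emits matching inputs.
foreach ( $bookmark_fields as $key => $field ) {
	printf(
		'<p><label>%s <input type="%s" name="%s" /></label></p>',
		esc_html( $field['label'] ),
		esc_attr( $field['type'] ),
		esc_attr( $key )
	);
}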

So, both of these ideas are elements of what I want to build. The challenge is the UI to bring these pieces together. It is going to require some trial and error on my part to get it right.

Right now, after having merged in the Fields file, which I will eventually hook up to the form interfaces, I will be finishing the Citation Edit UI and then seeing how I can link that to posts, following the attachment model.

Then I will work on how to properly save and edit the combined post.

Thoughts?

My name is David Shanske (david.shanske.com). I attended the last two Austin IndieWebCamps, but I was unable to attend this one. I am from New York, but I’m currently remoting in from Sofia, Bulgaria. This post is a great example of some of the things I like to play with. I have my current location imported automatically, the timestamp changed to where I am, and the weather. I can tick a box and syndicate to micro.blog, Twitter, etc. I am known in the community as the maintainer of and a contributor to many of the Indieweb plugins for WordPress. I’m available remotely if someone at the camp needs help.

Webmentions for WordPress 4.0.3 Released

Yesterday, version 4.0.3 of the webmention plugin for WordPress was released. Notably, this includes fixes for two issues.

  • The auto-approve functionality had not been functioning for some time. Props to Jeremy Felt for identifying where the issue was.
  • The avatar code was not identifying scenarios where Semantic Linkbacks turned a webmention into a comment, because the comment type for native comments is actually an empty string (see the sketch below).
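That empty string is easy to trip over. A minimal illustration of the check (a hypothetical helper, not the plugin’s actual code):

// A "regular" WordPress comment historically has an empty comment_type, so a
// check against the literal string 'comment' alone would miss it.
function sketch_is_native_comment( $comment ) {
	$comment = get_comment( $comment );
	return $comment && in_array( $comment->comment_type, array( '', 'comment' ), true );
}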

As always, thanks to Matthias Pfefferle for being the project author. I do not think he gets enough credit for dealing with all the pull requests I send him, or for the many Indieweb or Indieweb adjacent plugins he has created.

Fixing Times on EXIF

I’ve been working on a patch for WordPress that fixes the incorrect timestamp stored as part of WordPress image metadata. I already do something like this in my Simple Location plugin, but I’ve found a way that works more simply.

To summarize the issue, there are multiple datetime properties stored in your photos: the creation time of the file (DateTime), the time the image was created (DateTimeOriginal), and the time it was digitized (DateTimeDigitized). There are also two separate properties for the GPS time and date.

The problem is that the GPS time and date, which may reflect the last GPS lock rather than when the picture was taken, are in UTC, while the rest have no defined timezone. With photos taken on a cell phone, if you have the GPS time and date, you also have the location, which can be used to figure out the timezone as well.

Offset fields for the three datetime stamps mentioned were introduced in EXIF version 2.31, which came out in July of 2016. PHP returns these with its exif_read_data function as ‘UndefinedTag:0x9010’, ‘UndefinedTag:0x9011’, ‘UndefinedTag:0x9012’, reflecting DateTime, DateTimeOriginal, and DateTimeDigitized respectively.

I don’t currently support these; I only just learned about them and intended to add support… until I read about EXIF version 2.32, which came out last year and which no one is using yet. It retires all of those offset fields introduced in 2.31 and switches all of the date properties to ISO 8601. It also changes GPS coordinate storage, if you use that.

So any code has to check the EXIF version: if 0231, check the offset fields; if 0232, use just the properties; and if less than 0231, try to derive the timezone from the GPS location or timestamp.
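Sketched out, the branching might look something like this, assuming $exif came from exif_read_data(); the strategy names are just labels for the three cases above.

function sketch_exif_offset_strategy( $exif ) {
	// ExifVersion is returned as a string such as '0231'.
	$version = isset( $exif['ExifVersion'] ) ? (int) $exif['ExifVersion'] : 0;

	if ( $version >= 232 ) {
		// 2.32: the datetime properties themselves carry the offset.
		return 'use-datetime-properties';
	}

	if ( $version >= 231 && isset( $exif['UndefinedTag:0x9011'] ) ) {
		// 2.31: apply OffsetTimeOriginal (0x9011) to DateTimeOriginal.
		return 'use-offset-fields';
	}

	// Older versions: derive the timezone from the GPS location or timestamp.
	return 'derive-from-gps';
}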


// Build a UTC DateTimeImmutable from the EXIF GPSDateStamp string and the
// GPSTimeStamp array (hours, minutes, seconds).
function exif_gpsdatetime( $datestamp, $timestamp ) {
	return new DateTimeImmutable(
		$datestamp . ' ' . sprintf( '%02d:%02d:%02d', (int) $timestamp[0], (int) $timestamp[1], (int) $timestamp[2] ),
		new DateTimeZone( 'UTC' )
	);
}

// Derive a timezone from the difference between the local EXIF datetime
// (treated as UTC) and the true UTC GPS datetime.
function derive_exif_timezone( $datetime, $gmtdatetime ) {
	$seconds = $datetime->getTimestamp() - $gmtdatetime->getTimestamp();
	return new DateTimeZone( timezone_name_from_abbr( '', $seconds, 0 ) );
}

These are the two functions I’m using to calculate the timezone offset without using geo coordinates. You could also use a timezone lookup database for the location, but that adds another dependency.

If you want to go solely with built-in PHP functions, you can take the datetime stamp (whichever of the properties you are using), assume it is UTC, figure out the difference between the two times, and use that to calculate a timezone, which you then apply to the original.
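Tying it together, usage would look something like this, assuming the image actually has GPS date, time, and a DateTimeOriginal property ($path is a hypothetical file path):

$exif = exif_read_data( $path );

// The GPS date and time are always UTC.
$gps_utc = exif_gpsdatetime( $exif['GPSDateStamp'], $exif['GPSTimeStamp'] );

// The camera's local time has no zone, so treat it as UTC for the comparison.
$local = new DateTimeImmutable( $exif['DateTimeOriginal'], new DateTimeZone( 'UTC' ) );

// The difference between the two yields a timezone to apply to the original.
$timezone = derive_exif_timezone( $local, $gps_utc );
$taken    = new DateTimeImmutable( $exif['DateTimeOriginal'], $timezone );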

WordPress takes the digitized property and converts it to a timestamp using strtotime, which usually produces an inaccurate timestamp and is therefore useless. I am trying to get this fixed.