The Green Shed

Alan Jacobs nails it:

Warzel errs here in assuming that when people in MAGAworld make declarative statements, and endorse or amplify the declarative statements of others, they do so because they believe those statements to be true. They don’t; nor do they believe or know them to be false. In my judgment, truth and falsehood do not at any point enter the frame of reference — such concepts are non-factors, and it is a category mistake to invoke them.

In MAGAworld, declarative statements are not meant to convey information about (as Wittgenstein would put it) what is the case. Declarative statements serve as identity markers — they simultaneously include and exclude, they simultaneously (a) consolidate the solidarity of people who believe they have shared interests and (b) totally freak out the libtards. That’s what they are for. They are not for conveying Facts, Truth, Reality — nobody cares about that shit. (People who call themselves Truth Seekers are being as ironic as it is possible to be.) Such statements demarcate Inside from Outside in a way that delivers plenty of lulz, and that is their entire function. In that sense only they articulate a kind of dark gospel.

Read the whole thing.

HDR Sample Photos

Related to The State of HDR Photography, I thought it would be helpful to have some reference files to check over time as tools and the ecosystem evolve. So here are a few images, in various formats, that may or may not look totally weird in your browser.

Clicking/Tapping on a photo will take you directly to the image file.

HDR Video

Here’s a little HDR video embed. You should see a bright white rectangle:

And, if HDR images are working, that’s how bright the brightest parts of the images below should appear.

Buildings

Subject: Two buildings in Tokyo, Japan. A mostly cloudy sky with strong sun glint on the side of the buildings, and deep shadow at the base.

AVIF

JPEG XL

JPEG w/ Gain Map

Cat

Subject: Cat on the video screen on the side of a building in Tokyo. The edge of the display is at 100% brightness. You should be able to see the details on the shirt of the man on the sign on the side of the building.

AVIF

JPEG XL

JPEG w/ Gain Map

Shinjuku

Subject: A crowded street in Shinjuku, at night. The lights should glow above the busy streets.

AVIF

JPEG XL

JPEG w/ Gain Map

The State of HDR Photography in 2024

Adobe recently released Lightroom Classic 14, which included a few HDR-related updates:

  • Added file formats with HDR export support for ISO gain map
  • Experience HDR in secondary display with added multiple-view support

Being able to create, edit, and view HDR photos in my photography workflow is something I’ve wanted for a long time now. Between the latest features in Lightroom and improved support for HDR photos in iOS and macOS, we’re almost there.

What I Mean By “HDR”

“HDR” has been used to describe tone-mapping within the context of a traditional 8-bit image file format. It was popular years ago when digital cameras didn’t quite have the dynamic range to capture a scene, so we’d end up bracketing exposures and combining them in post. The results were not always great. This is not what I mean by HDR (sorry for the confusion).

When I say “HDR”, I’m talking about editing and exporting photos to have a higher dynamic range, when viewed, than traditional SDR images.

The first widespread use of this kind of HDR was photos taken with an iPhone. Using the built-in camera app, you could take a photo of a scene with a very wide dynamic range. The phone would capture the scene, and when displaying the photo back it would brighten parts of the display beyond your brightness setting, creating a wider dynamic range when viewed. By this point we’ve probably all experienced this kind of photo on our phones in one app or another. You’ll notice it as the light parts of an image suddenly “brightening up” after the photo loads in.

Why I Care About This

I love taking landscape photos, and whether landscape or any other subject, I’m a big sucker for high contrast images. I want to be able to share my photos with others and let them get that Brightest Brights and Darkest Darks viewing experience that I enjoy.

It’s also closer to reality. Our eyes and brain are able to piece together scenes with an astonishing dynamic range of 18-20 stops, yet a typical computer screen can only display 6-10. Since each stop is a doubling of brightness, that’s a contrast ratio of roughly 1,000,000:1 versus about 1,000:1. Modern HDR displays can push that up much closer to the range we see with our eyes, giving us a much more realistic experience when viewing photos.

The Editing Experience: Great

The editing experience has gotten Really Good at this point. Lightroom’s HDR editing features are intuitive and easy to use, and HDR content can now be displayed in fullscreen view as well as the library loupe and compare views, in addition to the editing view. Simply click the HDR button in the editing pane and crank up the highlights or exposure to get going.

Surely Lightroom will continue to offer more HDR features, but frankly it’s plenty good enough right now to do what I want.

The Exporting Experience: Good

Recent Lightroom updates have added the ability to export JPEG w/ Gain Map, which expands the number of devices that can view exported photos. And LR still supports the JPEG XL and AVIF output formats.

There seem to be some issues with JXL files exported from LR and displayed in Photos (see below), but it’s unclear whether the bug is Adobe’s or Apple’s.
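
If you want to sanity-check what Lightroom actually wrote, the exported files are easy to sniff from a script. Here’s a crude Python sketch that guesses the format from each file’s magic bytes and, for JPEGs, greps for the hdrgm:Version XMP tag that Adobe’s gain-map exports appear to embed (that marker check is a heuristic on my part, not a documented API):

```python
# Crude format sniffer for exported HDR photos.
# The gain-map check is a heuristic: Adobe-style gain-map JPEGs appear to
# embed XMP in the hdr-gain-map namespace, including an "hdrgm:Version" tag.
import sys

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] == b"\xff\xd8":  # JPEG SOI marker
        return "JPEG w/ Gain Map" if b"hdrgm:Version" in data else "plain JPEG"
    if data[4:12] == b"ftypavif":  # ISO BMFF 'ftyp' box, major brand 'avif'
        return "AVIF"
    if data[:2] == b"\xff\x0a" or data[:12] == b"\x00\x00\x00\x0cJXL \r\n\x87\n":
        return "JPEG XL"  # bare codestream or ISO BMFF container
    return "unknown"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {sniff(path)}")
```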

The Viewing Experience: Not Great

While editing and exporting photos in HDR is at least Good Enough today, sharing and viewing those files elsewhere is still Not Great. Having played around with various apps and settings lately, here’s what I’ve found:

  • Photos app on iOS 18 (18.0.1) will properly display AVIF, JXL, and JPEG w/ Gain Map photos in HDR…
  • … with the exception of JXL images, where it seems to blow out the highlights. I’m not sure if this is an Apple bug or an Adobe bug, but right now I think it’s an Apple one.
  • If you want to get these images into Photos, you must AirDrop or drag-and-drop them into Photos. Messages will convert the image to JPG when sending.
  • (Update: you may be able to keep the file type when sending by tapping “Options” and choosing “Current Format”. This is not obvious in any case.)
  • Photos app on macOS 14.5 will display JXL files in HDR, but not AVIF or JPEG w/ Gain Map.
  • Safari and Preview should display them correctly on macOS 14.6+, but earlier versions of macOS will not show them in HDR.
  • Some versions of Chrome, somewhere, might work. Maybe.

So, showing off your photos in HDR is still a crap sandwich, unfortunately. If you have patience you can import these into your own photo library easily enough, but good luck sharing them in any way beyond holding up your phone and showing them to someone else.

Hopefully in a few years we’ll have all of this ironed out and it’ll be easier to share these with others.

My Workflow

Right now, in October of 2024, my workflow is:

  1. Take photo with Sony a7R IV.
  2. Edit in Lightroom’s HDR mode.
  3. Export as JPEG XL, HDR, Rec.2020 color space.
  4. AirDrop file to my phone.
  5. Enjoy by myself, or hold my phone up to show others.

Conclusions

HDR support for editing and exporting is decent. If you’re into this, it’s a fine time to edit some HDR exports, and put them with the rest of your photos. Hopefully in a couple years it’ll be much easier to share these, and you can go back to the well and pluck those old photos out to share around.

Progress is being made, and eventually this will work everywhere. Just gotta be patient until then.

Came across an article mentioning a merger between Alaska & Hawaiian Airlines, which noted that “the name Northwest Airlines has already been used”.

And that reminded me of my brief affinity for NWA in the late 90s/early 00s. Not because of the real airline (I think I flew on them once or twice), but because I was part of a virtual NWA (I think via VATSIM (https://vatsim.net)), flying MS Flight Simulator with other sim geeks all across the globe.

Suica Cards, Contactless Passes, ECP

Just got back from a trip to Japan (hope to write much more about that in the near future). One of the stand-out experiences was using a [Suica](https://www.jreast.co.jp/multi/pass/suica.html) card for transit, storage lockers, and a few other purchases.

If you set up the card on your iPhone, you can turn on “Express Transit” mode, which lets you tap your phone on the terminal without having to unlock it. It’s a super convenient, fast way to move through the system.

Using this all over Japan got me curious about some of the technology behind it all. I haven’t been able to sift through all the details, but here’s what I’ve found so far:

  • Apple covers the security angle of Contactless passes in the Apple Platform Security Guide here.
  • The Apple VAS (Value Added Services) protocol has been reverse engineered with details on GitHub here.
  • If you want to use your own passes with hardware, you’ll need some certificates from Apple (see the security guide, above). Depending on who you are/what you’re doing, it might not be worth the hassle. There are companies like Pass Ninja that you can piggyback off. (I have not used Pass Ninja and cannot recommend them; they might be great, they might be terrible.)
  • You’ll need hardware that has compatible software for ECP (Enhanced Contactless Protocol). You can find the reverse-engineered details in this guide, which includes sample code you could use on an Arduino.
  • You can probably make the software work with this NFC reader from Adafruit.
  • This tech is used by much more than Suica. There are office badges, car keys, gym passes, and more.
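
The ECP/VAS side is too involved for a quick snippet, but if you just want to poke at a physical Suica from a laptop, the card’s ride-history file is readable without any encryption. Here’s a rough Python sketch using the nfcpy library; it assumes a reader nfcpy supports (such as Sony’s RC-S380) and the commonly documented Suica history service code, so treat it as a starting point rather than gospel:

```python
# Rough sketch: read Suica's ride-history file with nfcpy (pip install nfcpy).
# Assumes a reader nfcpy supports (e.g. Sony RC-S380). 0x090F is the commonly
# documented history service code (cyclic file, readable without encryption).
import nfc
from nfc.tag.tt3 import BlockCode, ServiceCode

HISTORY_SERVICE = 0x090F

def on_connect(tag):
    # FeliCa service codes split into a 10-bit number and 6-bit attribute.
    sc = ServiceCode(HISTORY_SERVICE >> 6, HISTORY_SERVICE & 0x3F)
    for i in range(10):  # most recent entries, 16 bytes each
        block = tag.read_without_encryption([sc], [BlockCode(i, service=0)])
        print(f"entry {i}: {block.hex()}")
    return True  # hold the connection until the card is removed

with nfc.ContactlessFrontend("usb") as clf:
    clf.connect(rdwr={"on-connect": on_connect})
```

Decoding those 16-byte entries (dates, station codes, remaining balance) is its own rabbit hole, and the field layouts floating around online are all reverse engineered too.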

Sitting on the Shinkansen drinking a beer at 175 mph. This is incredible.

Internet speeds seem to be about 25-50 KB/s up here over the ocean. Not exactly Starlink speeds at the moment (apparently that’s coming in 2025).

Over the Pacific, on my way to Tokyo. This is my first time ever flying business class, and I would say so far it is not overrated.

The last day at work before a long vacation is the worst. Senioritis is killing me.

This is a test title

Here’s my text for this test post. Let’s see if this works.

A Leaky Space Station

Turns out the ISS is leaking:

in February of this year NASA identified an increase in the leak rate from less than 1 pound of atmosphere a day to 2.4 pounds a day, and in April this rate increased to 3.7 pounds a day. Despite years of investigation, neither Russian nor US officials have identified the underlying cause of the leak.

CA law banning 'Buy' in context of Licensing

I love the intention behind this California law banning the use of “Buy” and “Purchase” when you’re merely licensing the underlying item. I haven’t read the actual law, so I have no idea if it will make a difference, but I’m a big proponent of the idea.

Web Ring as Subway

I like this idea by Jake at Polymathematics. It’s a fun take on the web ring idea, imagining it as stops along a subway. An implementation that didn’t require JS would be even better. :wink:

A new way of computing?

Ben Thompson’s latest piece, Enterprise Philosophy and The First Wave of AI, is an interesting (and long) take on how he imagines AI being built into the real world.

It’s worth reading, but one item from the summary stands out to me:

My core contention here, however, is that AI truly is a new way of computing, and that means the better analogies are to computing itself. Transformers are the transistor, and mainframes are today’s models. The GUI is, arguably, still TBD.

(Emphasis mine).

I’m not convinced (yet?). I still haven’t seen an implementation that demonstrates substantially different capabilities from previous tools, with two exceptions:

  • Visual media manipulation (generate images, videos, remove objects)
  • Slop text generation

While visual media manipulation is impressive (I use the latest features in Lightroom all the time), its impact still feels more augmentative than generative so far. I’m probably under-appreciating the degree to which “generate an image of X for me” will be impactful, but the outcome seems far more likely to be more images generated than it is to replace the human who builds our UI¹.

Same with text generation: how many people actively writing slop right now are in danger of being replaced by a computer doing it? More than zero, I’d wager, but this doesn’t seem society-transforming (maybe in its annoyance, but that’s something else).

Neither of these feel like “a new way of computing” to me. So perhaps this AI future is in “Agents”. But where is the demonstration that this technology can lead to these agents existing, let alone working well enough to be considered a paradigm shift?

The improvement curve of these LLMs seems to be flattening, and having the LLM run its own multi-shot answer to each question (like o1 does) might be more effective, but it’s hardly a game-changer, IMO. So where are the orders-of-magnitude improvements going to come from?

I’m open to the idea it could happen, but I’m not seeing it as remotely inevitable along the existing axes of improvement.

Maybe Ben’s right, and this really is a generational transformation, in which case I’ll struggle to adapt, if I adapt at all, because I’m too entrenched in existing ways of computing to think outside the box.

Or maybe this is more like grocery store self-checkout, where every large organization, chasing the dream of free labor, will spend considerably more money than it could ever hope to save building complex, obnoxious contraptions that fail to do the job as well as a 17-year-old clerk.

  1. Declarative UI frameworks are arguably more human-replacing than MidJourney is ever going to be, IMO.

Enjoying Tailscale

Finally started playing around with Tailscale recently, and I’ve really been enjoying it. It feels like exactly the kind of computer technology I always love: make an annoying problem dead-simple to solve, and leave it at that.

So what is Tailscale? For me, it’s a dead-simple way to set up a VPN for my home network and then access that network from any of my devices, anywhere.
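
It’s also pleasantly scriptable. As a small example, here’s a Python sketch that shells out to the CLI to list the devices on a tailnet (it assumes tailscale is on your PATH and that your version supports status --json; the JSON field names may differ between releases):

```python
# List devices on the tailnet via the tailscale CLI.
# Assumes `tailscale` is on PATH and supports `status --json`;
# the JSON field names used below may vary between releases.
import json
import subprocess

result = subprocess.run(
    ["tailscale", "status", "--json"],
    capture_output=True, text=True, check=True,
)
status = json.loads(result.stdout)

# "Peer" is a map keyed by node key; each entry describes one device.
for peer in status.get("Peer", {}).values():
    name = peer.get("HostName", "?")
    ips = ", ".join(peer.get("TailscaleIPs", []))
    state = "online" if peer.get("Online") else "offline"
    print(f"{name:<20} {state:<8} {ips}")
```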