The Green Shed

Golden Unicorn

Alpenglow illuminates Unicorn Peak just before sunset in Yosemite National Park, California.
Date Taken
October 04, 2022
Camera
SONY ILCE-7RM4A
Lens
FE 85mm F1.8
Focal Length
85.0 mm
Shutter Speed
1/20
Aperture
f/9.0
ISO
100
Keywords
Alpenglow, Landscape, Mountains, Sunset

AirPods Pro II

I got the first generation of AirPods Pro shortly after they came out, and I’ve loved them. They’re super small and portable, and while the noise cancelling isn’t quite up to the level of my Bose QC 35s, they are so light and easy to transport that I often grab them instead.

Unfortunately, my pair have been subject to the crackle of doom for a while now, and over the last few weeks it’s gotten noticeably worse.

It being Christmas time, I opted to get myself a little gift, and bought the second generation AirPods Pro — and they’re really great!

The noise cancellation in these is so good it feels like magic. The first generation ANC was great, especially for something so small. But it’s blown away by the new AirPods. I can have my music paused in the gym and it feels nearly silent. They’re amazing.

The new volume gesture on the “stick” is almost worth the upgrade on its own. I was always annoyed to have to reach for my phone to adjust the volume, but I didn’t realize just how often I was doing that until I had an alternative. The gesture feels natural, it works every time, and it really makes the experience of wearing the AirPods feel better.

The next big test will be wearing them during a flight. Typically, I travel with my bulky QC 35s because the quieting is so good, but with these new AirPods I think I can get away with leaving them at home.

All in all, this has been a really great upgrade! Highly recommended.

My question about all this is: And then? You rush through the writing, the researching, the watching, the listening, you’re done with it, you get it behind you — and what is in front of you? Well, death, for one thing. For the main thing.

Alan Jacobs, and then?

Recursive Regression

If tools like GPT result in the creation of large amounts of new content, and then that content gets ingested by the next generation of the models, at what point does that impact the quality of the new output?

Is all future output bounded by the quality of whatever was available broadly circa late 2021?

Thinking about ChatGPT Security

Imagine the attention these ML systems will attract once it becomes public that big companies/governments/individuals use them for any given task. Suddenly, there will be immense incentive to poison the datasets in subtle but impactful ways. Then we’ll see attacks that are both very direct (compromise the humans overseeing the “safety” of the system) as well as very indirect (massive content farms taking advantage of known weaknesses/limitations to poison the data pool(s)).

Imagine when someone hacks one of these organizations’ systems, then subtly alters the precedence used when ingesting information!

And then there’s figuring out how to inject the right prompts to leverage RCE vulnerabilities!

And the more “capable” they become, the bigger the attack surface!

Being Human

From this thought-provoking article by Jessica Martin (via Alan Jacobs):

in our world dominated by online representation, we feel our physical expressions to be ephemeral, powerless, invisible. If we are not on the internet, we think we are not really present at all.

And – however our merciful God might redeem our terrible choices – there’s something very, very wrong about that.

(Emphasis mine)


I’ve been chewing on what feels like a bunch of related ideas around the central theme of Being Human. This quote is one of them, and it feels relevant today as I’m thinking about a computer program mimicking human beings well enough to freak a bunch of people out.

ChatGPT

Played around with ChatGPT a bit this weekend, which is currently at the peak of some kind of hype cycle.

My first reaction is that I really don’t like it, though a lot of it is impressive. I’ve been trying to figure out why my feelings are so negative. I haven’t been able to get to the bottom of it, but I thought I’d jot down some notes; maybe this will coalesce in the future, or just serve as an embarrassing misjudgment. Time will tell.

  • The interface (a back-and-forth chat style) is impressive, and sets certain expectations. It feels like the bar isn’t too high when you’re casually chatting back and forth.
  • The responses feel mimetic, and lacking in confidence. Slightly unnatural, in the way you sense when talking to an eager young person trying to impress you with their knowledge. It’s not that they’re entirely ignorant, but they overdo the details in a way a true expert wouldn’t.
  • The code generation is impressive, and I keep seeing people writing about how this is the end of programming, but writing code is a tiny part of what it means to be a programmer. I suppose if your job is to take a very detailed specification and turn it into code, no questions asked, maybe this is closer to your day-to-day work? I haven’t seen it do anything particularly impressive yet. But, it is really good at generating boilerplate. Which, again, is a minuscule fraction of my work.
  • I keep hearing, “Just wait til we see what this does in 5 years!” I’m not sure why we should have a lot of expectation that it will be much better. These models have mostly improved by adding additional layers and additional source text. At some point you start to run out of useful source text. And at some point more layers aren’t going to be that helpful. Are we remotely close to that? I don’t know.
  • The prose that it writes feels so bland to me. There’s no voice there. It’s almost like if you averaged the voices of all the English writers together.
  • Sometimes it can be fun, no doubt.
  • I don’t like the fact that these models are all trained on public data, then used as for-profit tools. I wonder if copyright law will ever catch up here.
  • Clearly this model has been fed a lot of fan fiction. A lot.
  • Being able to type out queries in plain English is certainly valuable, and would open up this knowledge to many people who might not otherwise have an easy time with a generic search engine (I guess?). But, it’s really too bad that it doesn’t point you to any kind of original source as it gives you answers.
  • How do you keep one of these models up to date with new information?
A fog bank lies below the mountains surrounding Mono Lake in California.
Date Taken
November 23, 2022
Camera
SONY ILCE-7RM4A
Lens
FE 35mm F1.8
Focal Length
35.0 mm
Shutter Speed
1/800
Aperture
f/10.0
ISO
100
Sunset lights up the snow-covered peaks south of Mammoth Lakes, California.
Date Taken
November 21, 2022
Camera
SONY ILCE-7RM4A
Lens
FE 85mm F1.8
Focal Length
85.0 mm
Shutter Speed
1/160
Aperture
f/9.0
ISO
100

How to use a Github Personal Access token for cloning a repo:

$ git clone https://ghp_YOURTOKENHERE:x-oauth-basic@github.com/Organization/My-Repo.git
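One downside of that one-liner is that the token lands in your shell history as part of the URL. A small variation, with the token in a variable (the token and repo names below are placeholders, same as above):

```shell
# Placeholder token and repo; substitute your own values.
TOKEN="ghp_YOURTOKENHERE"
REPO="Organization/My-Repo"

# The PAT goes in the username slot; "x-oauth-basic" is a dummy password.
CLONE_URL="https://${TOKEN}:x-oauth-basic@github.com/${REPO}.git"
echo "$CLONE_URL"
# Then: git clone "$CLONE_URL"
```

For anything long-lived, a git credential helper (e.g. `git config --global credential.helper store`) avoids baking the token into the remote URL at all.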

Moonlight Sonata

Moonlight illuminates Tuolumne Meadows beneath a clear dark sky in Yosemite National Park.
Date Taken
October 04, 2022
Camera
SONY ILCE-7RM4A
Lens
FE 35mm F1.8
Focal Length
35.0 mm
Shutter Speed
4 sec
Aperture
f/1.8
ISO
640

CloudFlare Tunnels And Multiple Accounts

Confirmed today that CloudFlare Tunnels supports multiple accounts on the same machine (in my case, a Raspberry Pi), but you need to set up each tunnel with separate auth/config/etc., i.e., do the full cloudflared login loop for each account.

Multiple domains within a single account work fine with a single credentials setup.
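A rough sketch of what that per-account separation can look like (the account and tunnel names here are hypothetical, not from my actual setup):

```shell
# One config directory per CloudFlare account, so certs and tunnel
# credentials never collide. Account names below are hypothetical.
for acct in personal work; do
  mkdir -p "$HOME/.cloudflared-$acct"
done

# Then repeat the full login loop once per account, pointing each run
# at that account's own cert (commented out; requires a browser):
#   TUNNEL_ORIGIN_CERT="$HOME/.cloudflared-personal/cert.pem" cloudflared tunnel login
#   TUNNEL_ORIGIN_CERT="$HOME/.cloudflared-personal/cert.pem" cloudflared tunnel create my-tunnel
# And run each tunnel against its own config file:
#   cloudflared tunnel --config "$HOME/.cloudflared-personal/config.yml" run my-tunnel

ls -d "$HOME/.cloudflared-personal" "$HOME/.cloudflared-work"
```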

It occurred to me this morning that any time you see “Smart X” you can replace “Smart” with “Ad-Infested”. Works for everything. Smart TV, Smart Fridge, Smart Watch, Smart Coffee Maker…

Bicycles over Bombs

For as long as I can remember I’ve loved computers and computer technology. Some of my earliest memories are sitting behind a glowing green or amber screen, marveling at what these machines could do.

During my teens and twenties I relished every new release, every new upgrade, every new feature (Linux! Warcraft! PDAs! Geocities! Slashdot!). Sometimes there were some rocky starts, but I believed computers made everything better! You could buy any book, immediately, and online, whenever you wanted (and then you could buy anything online)! You could meet new people, learn about everything, and laugh with your friends about dancing cats. It all seemed so hopeful, so promising; I was so optimistic.

In my early thirties I started to get less enthusiastic. More and more of the “new” stuff was getting bought up by huge corporations and either immediately shut down or neutered into a listless malaise. The frontier was being settled.

And then things really turned the corner, and suddenly every new piece of computer technology is a vessel for trading personal information for neuronic addiction of some form.

And then social media came along, and destroyed the world in its wake.

These days, I find myself reading every new headline with skepticism, doubt, or dread. Cool new helpful device? Probably spying on us all or contributing to a new ecological disaster. Cool new piece of software? Probably comes at the cost of destroying something that was better so that the owner can sell more ads. New computer? More restrictive than the one that came before it.

Part of this is surely just getting older and being cynical, and perhaps a fair bit of rose-colored tint to the past. But even taking all that into consideration, it’s clear that computers have not only made the world better. In many ways, they’ve made it much worse. Powerful tools used to bring about the ends their masters desire.

At one point computers really were bicycles for the mind — tools that augmented us at a human scale. Today, too often, they are ICBMs of the mind — radical tools with intense power used to destroy us.

We need more bicycles, and fewer bombs.

One of the biggest scams has been people attributing fires to climate change when the real culprit is 100+ years of misguided fire management. Instead of actually fixing the problem with better policy, now we just blame it on climate change and throw our hands up, ensuring that the problem continues unabated.

Feels like my feeds have been drowning in pessimism this week. Deep, dark fatalism.

Got my PlayDate at the end of last week and spent the weekend working up a Blackjack game for it. This is my first time working in Lua, plus it’s a new platform, so there’s been a lot of learning.

If there’s any single takeaway so far, it’s that this platform is just so fun.

Was growing ever more frustrated by Time Machine and mdsync slowing down my computer. Today I found Time Machine Editor, which lets you block out times that you don’t want Time Machine to run. So now my computer isn’t going crazy for 15 minutes of every hour. Much better.

I think I finally figured out the source of one of my frustrations with setting up a complicated test context, like I’m trying to do for a project I’m currently working on: we spend all this effort in our codebase being sure that objects can only be created via our complicated logic paths (i.e., Service Objects, etc.), but then for the tests we go and manually set all this stuff up again. What I really want is to set up my test context via the exact same code paths that will be used in production, so I get all the same side effects.

And I want the tests to be fast.