Finally got my old bookshelf speakers out of a box in the garage and set them up in my bedroom. I should have done this a long time ago!
Golden Unicorn
AirPods Pro II
I got the first generation of AirPods Pro shortly after they came out, and I’ve loved them. They’re super small and portable, and while the noise cancelling isn’t quite up to the level of my Bose QC 35s, they are so light and easy to transport that I often grab them instead.
Unfortunately, my pair have been subject to the crackle of doom for a while now, and over the last few weeks it’s gotten noticeably worse.
It being Christmas time, I opted to get myself a little gift, and bought the second generation AirPods Pro — and they’re really great!
The noise cancellation in these is so good it feels like magic. The first generation ANC was great, especially for something so small. But it’s blown away by the new AirPods. I can have my music paused in the gym and it feels nearly silent. They’re amazing.
The new volume gesture on the “stick” is almost worth the upgrade on its own. I was always annoyed to have to reach for my phone to adjust the volume, but I didn’t realize just how often I was doing that until I had an alternative. The gesture feels natural, it works every time, and it really makes the experience of wearing the AirPods feel better.
The next big test will be wearing them during a flight. Typically, I travel with my bulky QC 35s because the quieting is so good, but with these new AirPods I think I can get away with leaving them at home.
All in all, this has been a really great upgrade! Highly recommended.
My question about all this is: And then? You rush through the writing, the researching, the watching, the listening, you’re done with it, you get it behind you — and what is in front of you? Well, death, for one thing. For the main thing.
Recursive Regression
If tools like GPT result in the creation of large amounts of new content, and then that content gets ingested by the next generation of the models, at what point does that impact the quality of the new output?
Is all future output bounded by the quality of whatever was available broadly circa late 2021?
Thinking about ChatGPT Security
Imagine the attention these ML systems will attract once it becomes public that big companies/governments/individuals use them for any given task. Suddenly, there will be immense incentive to poison the datasets in subtle but impactful ways. Then we’ll see attacks that are both very direct (compromise the humans overseeing the “safety” of the system) as well as very indirect (massive content farms taking advantage of known weaknesses/limitations to poison the data pool(s)).
Imagine when someone hacks one of these organizations’ systems, then subtly alters the precedence used when ingesting information!
And then there’s figuring out how to inject the right prompts to leverage RCE vulnerabilities!
And the more “capable” they become, the bigger the attack surface!
Being Human
From this thought-provoking article by Jessica Martin (via Alan Jacobs):
in our world dominated by online representation, we feel our physical expressions to be ephemeral, powerless, invisible. If we are not on the internet, we think we are not really present at all.
And – however our merciful God might redeem our terrible choices – there’s something very, very wrong about that.
(Emphasis mine)
I’ve been chewing on what feels like a bunch of related ideas around the central theme of Being Human. This quote is one of them, and it feels relevant today as I’m thinking about a computer program mimicking human beings well enough to freak a bunch of people out.
ChatGPT
Played around with ChatGPT a bit this weekend, which is currently at the peak of some kind of hype cycle.
My first reaction is that I really don’t like it, though a lot of it is impressive. I’ve been trying to figure out why my feelings are so negative. I haven’t been able to get to the bottom of it, but I thought I’d jot down some notes; maybe this will coalesce in the future, or just serve as an embarrassing misjudgment. Time will tell.
- The interface (a back-and-forth chat style) is impressive, and sets certain expectations. It feels like the bar isn’t too high when you’re casually chatting back and forth.
- The responses feel mimetic, and lacking in confidence. Slightly unnatural, in that sense you get when you’re talking to an eager young person trying to impress you with their knowledge: not entirely ignorant, but overdoing the details in a way a true expert wouldn’t.
- The code generation is impressive, and I keep seeing people writing about how this is the end of programming, but writing code is a tiny part of what it means to be a programmer. I suppose if your job is to take a very detailed specification and turn it into code, no questions asked, maybe this is closer to your day-to-day work? I haven’t seen it do anything particularly impressive yet. But, it is really good at generating boilerplate. Which, again, is a minuscule fraction of my work.
- I keep hearing, “Just wait til we see what this does in 5 years!” I’m not sure why we should have a lot of expectation that it will be much better. These models have mostly improved by adding additional layers and additional source text. At some point you start to run out of useful source text. And at some point more layers aren’t going to be that helpful. Are we remotely close to that? I don’t know.
- The prose that it writes feels so bland to me. There’s no voice there. It’s as if you averaged the voices of all the English writers together.
- Sometimes it can be fun, no doubt.
- I don’t like the fact that these models are all trained on public data, then used as for-profit tools. I wonder if copyright law will ever catch up here.
- Clearly this model has been fed a lot of fan fiction. A lot.
- Being able to type out queries in plain English is certainly valuable, and would open up this knowledge to many people who might not otherwise have an easy time with a generic search engine (I guess?). But, it’s really too bad that it doesn’t point you to any kind of original source as it gives you answers.
- How do you keep one of these models up to date with new information?
How to use a Github Personal Access token for cloning a repo:
$ git clone https://ghp_YOURTOKENHERE:x-oauth-basic@github.com/Organization/My-Repo.git
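One caveat with embedding the token in the URL: it ends up in your shell history and in the repo’s `.git/config`. A sketch of an alternative, using Git’s built-in credential helper so the token is supplied once at the prompt instead (the repo URL here is just the placeholder from above, not a real project):

```shell
# Cache credentials so the token never appears in the clone URL.
# "store" writes them in plain text to ~/.git-credentials;
# use "cache" instead to keep them in memory for a limited time.
git config --global credential.helper store

# Clone with a plain HTTPS URL; when prompted, enter your GitHub
# username and paste the ghp_... token as the password.
git clone https://github.com/Organization/My-Repo.git
```

Subsequent fetches and pushes reuse the stored credentials without prompting.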
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 85mm F1.8
Focal Length: 85.0 mm
Aperture: f/11.0
Shutter Speed: 1/30
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/1000
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/1000
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/200
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/1.8
Shutter Speed: 1/2500
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/640
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/500
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/320
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/640
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/16.0
Shutter Speed: 1/250
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/200
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/500
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/20.0
Shutter Speed: 1/160
ISO: 100
Exposure Compensation: -2
Flagstaff Arizona October 17, 2022
Camera: SONY ILCE-7RM4A
Lens: FE 35mm F1.8
Focal Length: 35.0 mm
Aperture: f/13.0
Shutter Speed: 1/800
ISO: 100
Exposure Compensation: -2