There Is No Cat

The alternative to flowers!

Wednesday, April 24, 2024

Chocolate Tuba

A couple of years ago, I registered a domain name and put up a placeholder that consisted of the site name and a photograph that illustrated it. Now that I’m not working, I took some time and actually built the site.

Chocolate Tuba is about music that crosses cultures and combines disparate genres. Two things that are great individually, but you would not expect to see them put together. And yet, they can be surprisingly delightful!

The idea here is that music that combines genres unexpectedly is like a chocolate tuba.

I built the site using Eleventy, a static site generator. My first attempt a couple of years ago was with Gatsby, but once I saw the code it generated, I discarded it. There was a lot of crud that I didn't write that was necessary for the site to operate. I didn't like that. Eleventy doesn't insist on using a particular Javascript framework like React; in fact, you don't have to use Javascript at all. I have a bit of Javascript in the build process, but the output doesn't include any at all, because it simply isn't necessary. All I'm doing here is writing a little text and including an embedded YouTube video (using the no-cookies approach to embedding) of music that fits the criteria established, that the music combine two approaches or genres or cultures in a way that is unexpected and that works.
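For anyone curious, the no-cookies approach just means pointing the embed at YouTube's privacy-enhanced domain, youtube-nocookie.com, instead of youtube.com. A minimal sketch of such an embed (the video ID and title here are placeholders):

```html
<!-- Privacy-enhanced embed: same player, but served from
     youtube-nocookie.com so YouTube doesn't set viewing cookies
     until the visitor actually plays the video -->
<iframe
  src="https://www.youtube-nocookie.com/embed/VIDEO_ID"
  title="Descriptive title of the video"
  width="560"
  height="315"
  loading="lazy"
  allowfullscreen></iframe>
```

No Javascript required, which is the whole point.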

Yesterday I saw a post shared a few times on social media about how the web used to be "fun and punk and ~weird~". Now that I have some free time on my hands, maybe I can help bring a little more fun, punk, ~weirdness~ back to the web.

Posted at 6:35 PM
Link to this entry || No comments (yet) || Trackbacks (0)

Tuesday, April 2, 2024

Social Media Holiday

I took the last month off from my Twitter-substitute social media networks, Mastodon and BlueSky.

It was wonderful.

These firehose-of-posts networks serve as distraction engines, sucking up my attention and making it hard for me to focus on anything else.

Social media has been part of my life for basically my entire adult life, going back to my early 20s when I spent all my time on Usenet.

It was useful to get away from it. I found myself being more present in my daily life. I took an improv class, and I didn't tell the entire world about it as it was happening.

I'm dipping my toes back in now. I don't feel like I can let myself dive in head first and go back to how I used to use these networks, though. BlueSky, in particular, annoys the fuck out of me with how its clients work, taking me to the most recent post and then forcing me to remember where I was before. That's a basic violation of the user experience need to minimize cognitive load, and it makes me want to use it less. For Mastodon, using Ivory as my client means that I can catch up at my leisure. I intend to take it up on that, only checking in a few times a day instead of switching to the program every few minutes to read the most recent three or four posts.

Social media is made up of people, and if I follow you, there's a reason I follow you. So I don't want to lose track of that. But I can't deal with the constant theft of attention. It's like having a hyperactive hyena baying in your ear every few minutes. I can't deal with that right now. Either I make social media work on my own terms moving forward, or I have to walk away.

Posted at 12:04 PM
Link to this entry || No comments (yet) || Trackbacks (0)

Wednesday, December 6, 2023

Buying a Digital Camera

I am buying a digital camera.

This is not a big deal for most people. It’s the default these days. You buy a camera, it’s digital.

The last serious digital camera I bought was a Canon Digital Rebel. It was 2004. I bought it because I thought that if I shot more, I could get better at photography, and I could shoot more with a digital camera than I could afford to with a film camera. The camera cost me about $1000 if I recall correctly. What I found with the camera was that I was shooting a lot more, but my photographs were getting worse. I would spray and pray, which is to say, take a lot of nearly identical images and hope one of them worked. Very few of them did. This was highlighted on a trip we took to Florida to visit my parents. We spent a day at the Kennedy Space Center. I brought my Digital Rebel, and also brought this cheap plastic camera I had found in a thrift shop in Levittown, Pennsylvania, for a dollar, a (1960s vintage original) Diana. This was before Lomography came out with their version of the camera. The battery in the Digital Rebel died after three pictures, so I was limited to just using the Diana for our visit to this iconic location. Weeks later, when I got the film developed, I was awestruck by the photos. The heyday of the space program was the 1960s, and the photos I took looked like they could have come straight from that era. I was more impressed by the photos I took with my one dollar camera than with anything that had come out of my thousand dollar camera. From that point on, I shifted back to film.

I shot for the first few years on toy cameras like the Diana. I had a Holga, a Fujipet, an Agfa Clack, a Superheadz Blackbird Fly fake TLR, and some of the goofy cameras coming out from Lomography. I also got a Lomo LC-A. Lomography’s slogan “don’t think, just shoot” worked for a little while, until it didn’t. I found myself slowing down and taking photographs more intentionally. I started to get into Soviet cameras like the Kiev 88cm and Kiev rangefinders. I got back into Polaroid, starting with my dad’s old 250 that shot peel-apart film, and getting a succession of SX-70s. The Kiev rangefinders led to me getting a couple of Contax rangefinders, one pre-war that formed the basis of the Kiev camera, and one post-war, which was the West German attempt to recreate the cameras that had been spirited away by the Soviets to Kiev. The Kiev 88 got me into more medium format cameras; we bought a Rolleiflex after seeing a documentary about Vivian Maier, and a friend gave me a Pentax 67 he wasn’t using. When New55 had their first Kickstarter, my love of Polaroid, which dated to my childhood, led me to getting into large format, first 4x5 with a Calumet CC-401, then a succession of other cameras, including a Pacemaker Speed Graphic, several Graflex SLRs including two RB Super Ds, an Intrepid 4x5, a Wanderlust Travelwide, a 5x7 Century No. 5 studio camera, and even an Intrepid 8x10 when I found a Polaroid 8x10 processor for a very good price and needed a camera to shoot that film with. I slowed way down, shot a lot less, and found my photography slowly improving.

20 years on, I found myself wondering if these changes in how I shoot would make me work differently with a digital camera. I’ve had my eye on the Fujifilm GFX 100s for the past year. All the reviews I read about it mentioned that it didn’t work for people who had a need for speed, but if you were slower and more intentional, it was a great choice. Still, $6000 for a camera? That’s way more than I’ve spent on any camera ever. Probably the most expensive cameras I’ve bought were the Graflex RB Super Ds, which I got for a steal at $500 each (one in working condition typically goes for about $1800). My Contax rangefinders, which were comparable to Leicas back in the day, went for about $225 each. For Black Friday, the price dropped significantly, down to $4400. I considered it; Laura offered to get it for me as a combined 60th birthday / 20th anniversary / Christmas present. Okay, honey, thank you sooooooo much.

The process of getting it has been a pain. I still don’t have it. I ordered the camera a week and a half ago from B&H on Black Friday. They shipped it that day, via FedEx. It got from their warehouse in Florence, New Jersey, to FedEx Newark by Friday evening, then disappeared. It was supposed to be delivered on Monday, but it never showed up. I’ve been fighting with the two companies to get them to replace the stuff I ordered, and B&H finally said they would yesterday, but they still haven’t shipped the replacement. It’s been a real pain in the ass trying to get this camera in my hands. Hopefully it ships today and I’ll have it tomorrow.

I look forward to seeing how the camera handles when I finally get it, and how I integrate it into my photography. I’ve seen film photographers who work with digital seamlessly alongside film, and I’ve seen others who get seduced by the ease and stop shooting film. I hope I’ll be the former.

I’ve dived into YouTube videos about digital photography in the past week and a half. I’ve been out of the loop on digital photography for a long time. It’s interesting how much work people put into it to make their photographs look like they were shot on film. The Fujifilm cameras lean into this with film simulations, and there are videos out there showing, for example, just how close their simulation of Fuji Acros film comes to the results actually shot on Acros with a film camera. I don’t know, there are an awful lot of black and white film stocks that aren’t Acros that I love to shoot, and I’m not sure that imitating them digitally is where I want to go. But for color work, it’ll be interesting. There are some things that are hard to do reliably with film that I want to try with the GFX 100s. I’m also considering ways to dirty the output of the GFX up using things like pinhole lenses. It would be fun to see if I can set the ISO high enough to make handheld pinhole snapshots with the camera.

It feels a little weird to be getting a digital camera. So much of my identity as a photographer for the past 20 years or so has been that of someone who was completely devoted to film. But I could use a new challenge. This is an experiment for me, just to see if I control the camera or it controls me. It’ll be interesting to see the results. If it doesn’t work, I could sell the camera and get that Deardorff 8x10 camera I’ve had my eye on....

Posted at 6:07 PM
Link to this entry || No comments (yet) || Trackbacks (0)

Wednesday, November 15, 2023

Failing to reproduce Polaroid Chocolate film

If you know the details of how Polaroid's long-lost Chocolate peel-apart film worked, you know that it was a combination of a color negative with black and white chemistry. My understanding is that the process originally came from people shooting 8x10 peel-apart film who manually combined the required pieces. I shoot 8x10 Polaroid, but the process now is derivative of the integral process, like the iconic SX-70 pictures. I figured I should see if I could recreate that film with the integral process instead of peel-apart.

I failed.

Blank frame

For my first attempt, I should have read up on things, because I misremembered and tried using a black and white negative with color chemistry. That didn't work at all. I got a blank white frame (mostly; some of the chemistry didn't spread, so there's a brown blob in the upper right corner). Oops.

But that left me with the required materials to do the experiment correctly, with a color negative and black and white chemistry.

This also failed.

Blank frame

I got a picture, kind of, and as the print has aged over the past several hours, the colors that were present have migrated to a more brown look, but most of the photograph didn't develop at all. If you look closely you can kind of almost imagine what's there, but no, it's not a success.

So, for science, and to save anyone else from the trouble and expense, the new integral-based 8x10 Polaroid does not have the ability to create the classic look of Chocolate film.

It only cost me two sheets of 8x10 to find out.

Posted at 6:41 PM
Link to this entry || No comments (yet) || Trackbacks (0)

Saturday, June 3, 2023

30 Years

In my office, I have an old Macintosh SE/30, the first Mac I owned myself (I used my dad’s 128 when he brought it home, which we upgraded to a Fat Mac and eventually to a Mac Plus). On the hard drive is an HTML file with a “last modified” date of June 3, 1993. That’s 30 years ago today.

I’ve been a web developer for 30 years.

When I started, there were about 70 web sites in the world. I think there were more gopherspaces than web sites at that point. I had a Sun workstation on my desk at work at the time, so when the NCSA at the University of Illinois released XMosaic at the end of April, I downloaded it and was able to browse the web the way we think of it today. More importantly, I was able to use the “View Source” menu item and look at how these pages were produced. What I found was quite similar to what I worked with every day, which was troff macros for formatting books. The troff tag .P was an exact match for the HTML P tag, for example. I’m sure I started writing my own stuff up pretty quickly, but today is the anniversary of the earliest day I have absolute proof for. So today I mark 30 years as a web developer. I’m sure I had come across the web before that; Ed Krol came out with a book in September, 1992, called “The Whole Internet User’s Guide and Catalog”, and I got a copy of it very early on from a bookstore in Philadelphia. Internet books did not show up in your local Barnes & Noble at the time. That book had a chapter in it about the web, and I know I spent some time using the old line mode client before XMosaic came out, but it was XMosaic that really crystallized everything. When I was a broadcasting major in college, one of my teachers, who was an executive at the university-owned TV station where I was working, told us that they expected an upcoming convergence of broadcasting, computers, and publishing. At the time, in the mid 1980s, they thought it was going to be Teletext, which was transmitted in the vertical blanking interval of TV signals. When I saw XMosaic, I recognized it as the thing that was actually going to usher in that convergence.

I remember showing my boss the web, saying that this was the future. He nodded and forgot about it. In October, when NCSA released Mosaic for Mac and Windows, another one of my co-workers showed it to him and he asked if I’d heard of it. I told him I had demoed it for him back in May. I was working as a technical writer at the time, and we started discussing the possibility of licensing Mosaic to include with the programs we were documenting as an online help system. The thing that made us decide not to was that web browsers didn’t support tables at that point, something that anyone who worked as a web developer in the late 1990s might find astonishing, as tables were the hackish way we did layouts starting in the mid-1990s (before that there was really no way at all of controlling page layout other than using header tags for typography).

Within a few months, I had transitioned from being a tech writer to making web sites full time.

In 1996, the company I worked for, AT&T, split in two, and I wound up going with the part of the company that made hardware, Lucent Technologies. I was part of a three person team that worked on the Bell Labs web site. A lot of the work was repetitive, so I tried to automate the most repetitive parts. I created a little content management system that would help me turn around press releases very quickly. It was only a local thing on my own computer rather than something I built on the server. I used a piece of middleware called Tango and ran WebSTAR as the server on my local Mac. Tango talked to FileMaker Pro and generated the pages in the format I needed, which I would then upload to the Bell Labs site. I could post a press release much more quickly than the team that ran the main corporate site. But I wound up getting laid off when Lucent ran into problems when the first Internet bubble burst. I went off to work for a startup for a year, which was a long story and not a positive thing, and wound up back at Lucent a year later as a contractor, working on both the Bell Labs site, which still had most of my code but with a new look and feel, and on the main corporate site. I did that for a few more years until Lucent was swallowed by Alcatel and my services were no longer required. That was in 2008. I saw that layoff coming, and had put some money away so I could take my time and figure out what I wanted to do.

What I wanted to do was work for an agency. I had seen so many instances where agencies were assigned all the interesting projects, and the in-house developers were left with the less interesting things. I wanted to work on the interesting things. Thanks to the efforts of a headhunter I worked with, I got a job at a small agency in NYC. I’ve been in agencyland ever since, a couple of months at the first place, then 2 years at a much larger agency that was part of a traditional advertising firm, then 13 years at one of the original digital agencies that spun up in the mid 1990s when the Internet caught fire, and now with a startup agency with some of my friends who also left that digital agency recently. One of the best things about moving to an agency was that I now worked for people who knew what I did. My direct supervisors actually did the same thing I did. That was not the case when I worked in industry, and it was the cause of much frustration and ultimately of layoffs because the people I worked for didn’t know what I did.

When I discovered the web in the early 1990s, the head of the group we were part of was a brilliant computer scientist named Jim Kutsch. He was blind. He had invented a talking terminal in the 1970s for his PhD thesis that connected a speech synthesizer to a VT-100 terminal (note that this is years before the first software commonly recognized as a screen reader). He was still using it when I worked for him in the early 1990s. From the very beginning of my career, I recognized accessibility as a potential boon and potential problem for the web. At the time, really the only thing you could do to make a site inaccessible was to not include text in the alt attribute of images. Every accessibility professional I know is groaning as they read that, because 30 years on, that’s still the most common accessibility issue. I have talked people’s ears off about accessibility for my entire career. For the first 20-25 years, they nodded their heads and then went on with their business. In the last 5-10 years, things have changed. That digital agency I spent 13 years working for got sued, and that drove accessibility to the top of their agenda. Still, it’s not easy, but in the past couple of years, I see signs that accessibility is really going mainstream. Libraries and frameworks like React did not originally concern themselves with accessibility. But looking at component libraries for a new project at my new job in recent weeks, I was surprised and gratified to see that a hefty percentage of the libraries I looked at touted the work they had done to make their components accessible. That’s a sea change. We may finally be getting back to a point where it’s possible to make a site accessible by mistake or without intention as opposed to the standard approach for the past 30 years of making a site inaccessible by mistake or without intention. I say getting back to a point because the original specifications for HTML did not include the img tag, and the web was pretty much inherently accessible until that tag was introduced.

Web development has changed dramatically over the years. When I left Lucent for the final time, I was sick and tired of working on sites that didn’t understand semantic markup, SEO, accessibility, etc. I had resisted working in NYC because of the lengthy commute, but at that point, it seemed like the only place I could find work with like-minded developers was in the city. Web development standards were changing, thanks to the efforts of things like the Web Standards Project that drove browser makers to end the browser wars, and the development of the HTML 5 standard, which is like a million pages long mainly because it describes exactly how every browser has to behave in the face of broken code, so that all browsers work the same. But the distribution of that knowledge and those practices was uneven, and hadn’t made it to the suburbs. So I went to the city to work. I would say about ten years ago, things really started to shift. Young developers may have never encountered a layout done entirely in tables at this point. There were a few years there where semantic markup and understanding the point of new tags like article was the hot thing. And then we moved on and that was forgotten in the excitement of confusing new approaches to web development and frameworks like Angular and React.

About five years ago I went to the staffing person at our agency and asked to be put on a project that used React, because it was clear I would need to learn it in order to stay relevant as a developer (I had been working with Angular and really didn’t want to keep going in that direction). One thing that’s been constant through the years is that as web development has changed, with every change there’s a changing of the guard to some extent. I often run into people who tell me they used to be developers. And it doesn’t surprise me. When CSS became a thing, some developers decided that the easiest course as the entire approach to developing sites changed was to move into a different part of the business. When Javascript started becoming much more important, again, some people hived off. At certain points, this coincided with a switch from the old webmaster era, where one person could understand everything they needed to know to create a web site, to what we have today where creating web sites is complex enough that it warrants having people for every specialty: SEO, user experience, visual design, project management, etc. All of these are fields that my friends who used to be developers wound up moving into instead.

Getting back to React. I’m not a huge fan. I describe React this way: Imagine that the web is a red rubber ball. Red rubber balls have a seam. If you take a sharp knife and cut half of that seam, then reach in with your fingers and flip the ball inside-out, that’s React. For a few years there, all the computer scientists flooding into web development talked about Model-View-Controller as the proper way to structure a web application. I agree. My favorite MVC framework for the web is... the web. You have this language that you use to model your data. It has limited semantics, but you can extend it to some extent (use proper HTML tags to describe the data and extend with classes and IDs). Then there’s a language that describes the view (CSS). And you have Javascript to be the controller. And most importantly, you don’t mix the three. It always amused me that people who insisted their Javascript apps follow MVC didn’t get the irony that they were violating MVC at the most basic level.

Of course, React isn’t even an MVC framework; its proponents say it’s at best the V part. But the move to putting everything in Javascript seems insane to me anyway. The web is a three legged stool. You have HTML as the first leg. If you get your HTML wrong, there’s a million page specification to ensure that the browser deals with it in the same way and you probably get what you intended anyway. It fails gracefully. CSS is the second leg. If you get your CSS wrong, browsers will ignore the part that’s wrong and just render everything else. It fails gracefully. Javascript is the third leg. If you get your Javascript wrong, the code fails completely and refuses to run. So which of these three legs are you going to commit your entire site to?

One of the bad effects of React is that a lot of good ideas about web site development went by the wayside. One in particular is progressive enhancement, the idea that you build a web site that works no matter what, then use more advanced technologies like Javascript and some of the more recent additions to CSS to make the experience better for users with browsers that support them. This approach ensured that search engines could index your site, and that errors in your Javascript wouldn’t prevent users from completing their tasks. React and other similar frameworks just kind of overwhelmed that.

That said, there seems to be something of a backlash brewing. On my current project at my new job, we decided to use a server side framework called Remix that takes code written in React and runs it on the server, then hydrates the generated pages so that they behave like a single page app, as in a standard React approach. It’s similar to Next.js in that sense. Remix does a lot of work behind the scenes to ensure that sites you build with it work even if the Javascript on the browser side fails, which is to say, progressive enhancement (they actually use that term on their site). I actually feel like I can use all the knowledge I’ve built over the past few years about how to build React sites to build sites that work the way they should instead of the way the industry has collectively hallucinated that we should for the past several years.

It has been an interesting 30 years. It’s been a wild roller coaster ride, with lots of ups and downs, periods of unemployment and periods of making a lot of money, periods where we get better at what we do and periods where we forget all the lessons we’ve learned over the years. It’s been a hell of a ride. I’m not going to be doing this for many more years; I was almost 30 when I started making web sites and I’m almost 60 now. But it’s been an awful lot of fun. Things are changing again. Things like ChatGPT and GitHub Copilot are amazing. I have occasionally used ChatGPT to help write code on a couple of personal projects like the Instagram and Twitter archive sites I recently created. It is a mixed blessing. It rarely gets code right on the first try. Copilot is somewhat better. It’s like a really smart autocomplete most of the time. And you can really work it out by writing your code as comments and see what it comes up with. Again, it’s not always right and it’s not perfect, but it’s impressive. And I think it makes me a faster coder. It’ll be interesting to see if people starting as developers today can post articles in 30 years about their experience as developers or if the machine learning bots get better enough to take over. I’m glad I won’t be doing this so much longer that this becomes an issue for me.

One of the most interesting aspects of web development has been the low barrier to entry. At the beginning, it didn’t require a computer science degree to create a web site. In a lot of ways, the professionalization of web development has been the counterattack of the computer scientists, and the move to Javascript as the primary way to build a site professionally a way of installing a gatekeeping function. But it is still possible to create a web site the old fashioned way, with HTML and CSS and a server somewhere for a few dollars per month. Browsers still understand those. And if View Source is less useful than it was 30 years ago because of obfuscation and minification, at least there are web sites out there that explain how to build a web site. The best thing about this industry has been people’s willingness to share what they know. I hope that never changes.

Posted at 7:13 PM
Link to this entry || No comments (yet) || Trackbacks (0)

Thursday, May 18, 2023

I Added a Twitter Archive site to My Instagram Archive site

I started using Twitter at SXSW in March, 2007, because all my friends there were using it to coordinate what they were doing that week. After the week was over, I found it was a great way to keep in touch with the people I met there. I was no stranger to online social media; I started using Usenet 20 years earlier, in 1987, when I started working at AT&T and got access to the nascent net. I met my wife on Usenet in 1990. So while I initially resisted Twitter, SXSW turned me around on it, and I quickly became an avid participant.

And I continued to be an avid participant for most of the next 16 years.

By the end, I was getting a little tired of it. I had accumulated a list of people I was following that was slightly too large for me to keep up with, and it started to feel like the tail was starting to wag the dog. So when Racist Spice bought the company, it was like he did me a favor in giving me the opportunity to burn it to the ground and start over somewhere else.

I could characterize the people I followed on Twitter into a few categories. There were my initial follows, the people I met at web conferences like SXSW over the years; that overlapped to a large extent with people I knew in previous years from the early days of blogging, so I kind of treat them together. Then there were the journalists and political posters I started to follow at some point. Some of them were prolific posters, and a few posted so often that I eventually had to unfollow them just to keep from feeling overwhelmed. More recently, two other communities that I started to follow were film photographers and experts on Ukraine. It was a great way to keep up with what was going on in the world.

A few years ago, I set up an account on Mastodon. I participated sporadically. When Twitter was set on fire, I moved the effort I had been putting into Twitter over there. Of the four communities I mentioned, the photographers made the most effort to move to Mastodon, so the majority of the people I follow there are photographers. A few of my OG follows from blogging and conferences and web stuff have moved, but really not that many. Journalists and politicos have largely stayed on Twitter, although there are a few who have jumped into Mastodon with both feet. And Ukraine? With one exception, a guy in Canada who posts a lot of translated stuff from the Ukrainian armed forces, none of them moved. I miss the people who haven't moved. I tried to read Twitter sporadically after I stopped posting there, particularly through lists made up of the communities I mentioned, but when Twitter turned off API access for third party clients like Tweetbot and Twitterrific, I stopped. Having to read Twitter through their own site is a freaking nightmare. I don't know how anyone puts up with their terrible interface.

When I stopped posting to Twitter, I downloaded the archive of my posts that they offered. Much like Instagram, they have a lot of information about you, but don't share the stuff that other people have created in response. So the archive lacks most of the context. They do include the number of retweets and likes each post got, which Instagram doesn't include, but nothing about who did them. There is a bit of context in that quote tweets are identifiable by the fact that they end with a link to the original tweet, and that reply tweets include a link to the tweet they're replying to. So that's something, and it's better than Instagram's petulant insistence that they own the community aspects of your presence. One other thing that's nice is that for shortened URLs, they include the original URL in the data, so you don't have to contact Twitter's services to decode them.
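As a sketch of that last point: each tweet in the archive pairs its t.co links with the original URLs, so expanding them is just a text substitution. The field names here (full_text, entities.urls with url/expanded_url pairs) match my copy of the export, but treat them as assumptions and check your own archive.

```javascript
// Replace every shortened t.co link in a tweet's text with the
// original URL the archive already provides.
function expandUrls(tweet) {
  let text = tweet.full_text;
  // entities.urls holds one { url, expanded_url } pair per link
  for (const { url, expanded_url } of tweet.entities?.urls ?? []) {
    text = text.replaceAll(url, expanded_url);
  }
  return text;
}

// A hypothetical tweet object in the archive's shape:
const tweet = {
  full_text: "New post: https://t.co/abc123",
  entities: {
    urls: [
      { url: "https://t.co/abc123", expanded_url: "https://example.com/post" },
    ],
  },
};

console.log(expandUrls(tweet)); // → "New post: https://example.com/post"
```

No network calls involved, which matters now that the API is gone.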

Prominent members of the web dev community that I've followed over the years have always made the point that you should post your content on your own sites. In that spirit, and in the understanding that Twitter may not continue to exist in its current form forever and all that effort would be lost, you can see all my posts there at

Posted at 1:00 PM
Link to this entry || No comments (yet) || Trackbacks (0)

Monday, May 8, 2023

Things I Learned Making a Site to Archive My Instagram Posts

Instagram offers users the ability to download a ZIP file that contains a lot of information about your account. It has all your posts. It has all the comments you’ve made. It has a record of every post you’ve liked. It has a record of every thing you’ve bought, every ad you clicked on, what they’ve figured out about you because of your behavior on the app. It’s interesting and a little disturbing.

I started on Instagram the first hour they were open to the public. My friend Dan Rubin had been a beta tester and was linking to his posts there during the beta period in his tweets, so I knew it was coming and was looking forward to joining. I got in really early; my member number is 2529. For a long time, it was fun, but over time, it became less so, to the point where I just didn’t enjoy posting there any more. So I stopped. I work in advertising, and I’m all too aware of how we track people online and sell what we learn to advertisers. I believe this practice, surveillance capitalism, is a danger to democracy and our way of life, and I just don’t want to participate in it any more. I miss the friends I made there who haven’t moved on the way I have, but life is short and participating in things I don’t enjoy any more is a non-starter. So fuck Instagram. I post my photos on sites where I pay for the hosting, which means that I’m the customer, so the people running the service have the incentive to make me, their customer, happy, instead of the way a “free” service like Instagram has the incentive to make their customers, the advertisers, happy.

So anyway, Instagram has all this information, and they’ve been shamed into letting you have a copy of it, because after all, you created it. They don’t have to provide it in a format that’s easy for you to understand, mind you. If you open up the files in the archive, you’ll find a lot of brackets and quotation marks and stuff that wouldn’t make sense to a civilian.

Fortunately, I’m a web developer. The brackets and quotation marks and stuff are a format called JSON (JavaScript Object Notation), and it’s designed to be easy for a web app to read. So I wrote a web app.
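To give you the flavor, here's a minimal sketch of the kind of thing that app does: flattening the posts file into something a civilian could read. The shape assumed here (an array of posts, each with a media list carrying a uri, title, and creation_timestamp) is what my export looked like; yours may differ.

```javascript
// Flatten Instagram's posts JSON into readable records.
// Assumed shape: [{ media: [{ uri, title, creation_timestamp }] }, ...]
function summarizePosts(posts) {
  return posts.flatMap((post) =>
    (post.media ?? []).map((m) => ({
      image: m.uri,
      caption: m.title ?? "",
      // Timestamps in the export are Unix seconds, not milliseconds.
      posted: new Date(m.creation_timestamp * 1000).toISOString(),
    }))
  );
}
```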

Here are some things I figured out about the data they provide (and maybe a few other things).

  1. They provide you with the set of everything you liked. Every post. Not the post itself, mind you, just the fact that you liked a post with this ID. You generated it, therefore you own it. But it’s missing the context; you liked this post, but that post belongs to someone else, the person (or corporation) that created it, so it’s not in your data file.
  2. The flip side of this is that all the likes that your posts generated belong to the people who created them, so there’s no record of who liked your posts or how many likes they got. If that’s important to you, you need to access the app. I assume there’s a data table somewhere that records all the likes on a post and they’re associated through a SQL JOIN or some equivalent (basically a way of associating data held in different places with each other), because generating the list of likes for each post by looking at every individual’s list of likes would get very costly. But those likes don’t belong to you, so you don’t get them.
  3. They provide you with every comment you’ve ever made. But (and you can see this coming), they don’t provide the context. Again, you get what you generated, and nothing anyone else generated (except for stuff like the shit they figured out about you, whether it’s right or not). Some subset of the comments you made are likely to be on your own posts, but again, without context, you lack the data to make sense of them or fully reconstruct what’s on the site.

It’s interesting. It’s like “We heard you, you want a copy of everything you’ve created on our site, and we’re going to give it to you good and hard.”

So if you want to create an archive of your Instagram posts, you have to understand that it’s not going to be a complete copy of what’s on Instagram itself.

In many ways, the chase for likes on Instagram is part of what makes it such a sick place to be, so I don’t miss them. I probably wouldn’t include them in my archive even if they were available. The missing context for comments is a little harder to accept, but it is what it is. The cudgel that Instagram uses to keep people coming back is the community; if they give that community away, they lose their hold on you, you drift away, and now they can’t sell your eyeballs.

The archive that I created is at It has all of my pictures, with none of the comments or likes. I created it as a way to evaluate different web frameworks we were looking at for an upcoming project at my new job. So I basically wrote the thing three times, and the one I liked best is the one I published. I’ll write another post focused on that experience.

Posted at 11:05 AM
Link to this entry || No comments (yet) || Trackbacks (0)

Thursday, December 22, 2022

I used to go to record stores

I used to go to record stores. I used to go to record stores a lot. We had some really great record stores thirty-five years ago. Here on the Jersey shore, we had Jack’s in Red Bank, where a friend slipped me a promo cassette of a compilation by New Zealand’s great Tall Dwarfs, sending me on a journey of discovery (I was already into bands on Flying Nun but somehow hadn’t discovered Tall Dwarfs yet); they also had cassettes on the Xpressway label from New Zealand that were incredibly rare. We had Vintage Vinyl in Fords and in their brief foray in Eatontown. In Hoboken, Pier Platters was amazing for anyone into bands like the dB’s, the Bongos, the Individuals, or any of a dozen other Hoboken bands. Princeton Record Exchange in Princeton was another great store. In State College, where I spent my college years, we had Arboria, a great source for cheap used records and European pressings of hard-to-find bands like the Soft Boys, and City Lights, where I found the first single by Yo La Tengo before anyone ever heard of them. In the city, Tower, of course, and right around the corner, Other Music. I couldn’t walk into either of those places and get out for less than a hundred bucks. Other Music in particular was unbelievable for the obscure stuff you could get there and couldn’t get anywhere else. Jack’s and PRX still exist, but the others are all gone. And even if they weren’t, I probably wouldn’t go there any more. Because record stores don’t have what I want any more.

The Internet ruined record stores, like it ruined book stores and so many other things in life. Things that you had to dig through cartons to find or write away to the other side of the world for were now just a search box away. It took a lot of the joy out of going to record stores. But the other thing was that places like Amazon took the profit out of record stores. Record Store Day was the final dagger. In an attempt to reclaim some of the custom that had fled to Amazon, RSD refocused record stores from bringing in young customers to bringing in older customers to repurchase music they had been listening to for decades, in different or expanded formats. It kept a number of record stores alive while destroying the thing that made me want to go there, the ability to find something I hadn’t heard before. By turning to things we had all heard before, they gave up their soul in exchange for continued life. RSD is all about reissues and live recordings of “legacy” artists, not about new music. If you want to find new music, you have to go elsewhere. I’m old, and by rights I should be listening to the same stuff I listened to when I was 18, but fuck that, if I learned one thing from listening to John Peel on the BBC World Service, it was that I want to hear something I never heard before, not the same shit over and over.

Of all the elsewheres to go, my favorite is Bandcamp. The great thing about Bandcamp is that they fill the gap left by record stores, but also the gap left by music magazines and fanzines, both of which were also destroyed by the Internet. Bandcamp Daily and other articles they post take advantage of the net’s ability to incorporate multimedia into pages, so instead of just telling you about a new band and their songs, they can let you hear them, something that zines were limited in doing (some zines included flexidiscs or companion CDs, but that was a limited number and far from standard). On Bandcamp, I can listen to the music before buying it, which is a nice addition to what I used to do with record stores when I would buy a record based on what the cover looked like.

The streaming services like Spotify and Apple Music have their places. I subscribed to Spotify for several years, then switched to Apple Music when Spotify started funding anti-vaxxers during the pandemic. I find them useful mainly for playing music I have on vinyl but haven’t digitized yet, and for a game I play where I play a song I know and then look through the related artists to find something else to play next. I’ve found a few new artists that way, but nowhere near as many as I find on Bandcamp.

If I look at where I buy music nowadays, I would say it’s probably 85% on Bandcamp, 5% on Amazon, 5% at record stores, and 5% from Apple Music/iTunes Music Store. I miss record stores, but the record stores I miss are never coming back.

Posted at 3:23 AM
Link to this entry || No comments (yet) || Trackbacks (0)


This site is copyright © 2002-2024, Ralph Brandi.

What do you mean there is no cat?

"You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat."

- Albert Einstein, explaining radio

There used to be a cat

[ photo of Mischief, a black and white cat ]

Mischief, 1988 - December 20, 2003

[ photo of Sylvester, a black and white cat ]

Sylvester (the Dorito Fiend), who died at Thanksgiving, 2000.



There Is No Cat is a photo Ralph Brandi joint.


