Collages and Small Printers

Working on some collage this week using tickertape printer output and found items. I’m starting to think that I want to use my little Bluetooth printer instead of my older Adafruit printer, which would change the format of my zine, but could be a positive outcome overall. I feel that I’ll spend more time doing assemblage and less time in software, but frankly I’m fine with that. I’ll have to play around with the Phomeo printer firmware though.

So after hooking up my old Adafruit printer that I used in 2019 and porting over my code, I’ve realized I’m hitting a lot of buffer overflows. I’m starting to lean more towards using my Phomeo, even though I wouldn’t be able to directly translate my existing code to it. I’m thinking maybe I can repurpose my old zine code into more of an assembler that outputs a PDF I can print from the Phomeo (rough sketch below). I’ll tinker with it. I have found some open source libraries for the Phomeo, but because it’s Bluetooth they’re not as reliable. One option could be outputting everything as a website and then printing from the web, though scaling could be a problem. I’m going to do some experiments.

Little Phomeo Bluetooth printer
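
As a starting point for that assembler idea, here’s a minimal sketch of laying text and images onto a narrow PDF strip sized for thermal paper. It assumes reportlab; the paper width, font, and file names are placeholder guesses, not settings I’ve verified against the Phomeo.

```python
# A minimal sketch of the zine "assembler" idea: lay captions and images onto
# a tall, narrow PDF strip that the Phomeo app can print. Uses reportlab; the
# 48 mm printable width, strip length, and file names are placeholder guesses.
from reportlab.lib.units import mm
from reportlab.pdfgen import canvas

PAGE_W = 48 * mm    # assumed printable width of 58 mm thermal paper
PAGE_H = 200 * mm   # arbitrary strip length for one zine section

def assemble(blocks, out_path="zine_strip.pdf"):
    """blocks: a list of ("text", str) or ("image", path) tuples, top to bottom."""
    c = canvas.Canvas(out_path, pagesize=(PAGE_W, PAGE_H))
    c.setFont("Courier", 7)
    y = PAGE_H - 10 * mm
    for kind, payload in blocks:
        if kind == "image":
            c.drawImage(payload, 2 * mm, y - 40 * mm,
                        width=PAGE_W - 4 * mm, height=40 * mm,
                        preserveAspectRatio=True, anchor="n")
            y -= 45 * mm
        else:
            for line in payload.splitlines():
                c.drawString(2 * mm, y, line)
                y -= 4 * mm
    c.save()

assemble([("text", "a test vignette\nprinted on a strip"),
          ("image", "photo_001.png")])
```

Doing it as a fixed-width PDF is partly about the scaling worry: the strip width is locked in the file, so whatever the Phomeo app does with it should at least be consistent.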

Process and Output

So I’ve had some good raw generations. The process I’m doing here is a variation on the process I used in 2019. I start with a photo I’ve taken, and I feed that into a caption generator and a prompt generator (I might add a third caption generator to the list, like Flickr8, but CLIP Prefix / Interrogator has been decent). I play with the settings available to me, generally temperature and top_k, until I get something back that I think is interesting.
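
The caption-to-text step is roughly the sketch below, using the transformers pipeline; the model size, caption, and knob values are just illustrative starting points, not settings I’m committed to.

```python
# Sketch of feeding a caption into GPT-Neo and playing with the sampling knobs.
# Model size, caption, and values here are illustrative starting points only.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

caption = "a pair of boots on a pebbled beach"   # e.g. a CLIP Prefix caption
out = generator(
    caption,
    do_sample=True,
    temperature=0.9,    # higher = stranger, lower = safer
    top_k=40,           # smaller = narrower word choices
    max_new_tokens=120,
)
print(out[0]["generated_text"])
```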

Some very early raw output that is too repetitive.

I run the same prompt text several times to get back a bulk of output. I then look through that raw output and pick out a few specific lines, which I re-feed back into GPT-Neo. Again I adjust the settings until I feel something interesting comes back. I might do this 2 or 3 times. After that I comb through all the output and start editing things into small vignettes. I found that using the single-line captions from CLIP Prefix, or editing a sentence together from Interrogator, worked better as GPT-Neo prompts than plunking in whole word salads.
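
The bulk-and-re-feed loop looks something like this sketch (same pipeline idea as above; the picked line is an invented example of the kind of thing I’d pull out by hand).

```python
# Sketch of the iterative loop: sample a prompt several times, read through the
# results by hand, then re-feed a chosen line with adjusted settings.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

def bulk(prompt, n=8, temperature=0.9, top_k=40):
    outs = generator(prompt, do_sample=True, temperature=temperature,
                     top_k=top_k, max_new_tokens=120, num_return_sequences=n)
    return [o["generated_text"] for o in outs]

raw = bulk("a pair of boots on a pebbled beach")

# ...comb through `raw` manually and copy out whatever feels interesting...
picked = "the tide keeps a ledger of everything it moves"   # invented example

# second pass: re-feed the picked line with slightly different knobs
second_pass = bulk(picked, n=4, temperature=1.0, top_k=60)
```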

One thing I’ve noticed is that I didn’t need as much data as I thought I would for this process. In my original write-up I was going to use my journals, planning docs, and caption output, and I still might…because I spent the time sorting them. But I already have a lot of good material to work with just from prompting GPT-Neo with captions, and I’m starting to feel that including too much extra text would just dilute my process.

I also found I didn’t need as many photos as I thought I would, mostly because I’m doing a manual process vs fine-tuning or something more automated. So part of this project has been just sorting. Sorting my photos, manually deciding what to keep, what to use, what to discard. Sorting through planning docs and picking out specific phrases. Sorting through raw output text.

What my image database in Notion looks like

It’s like an excavation of my personal archive. It’s enjoyable, and I feel it’s a deliberately slow way of working during a time when so many things want to go FAST. Maybe something to consider going forward is how to develop more methodologies for working slowly with AI.

Touch Point

I admit it’s been difficult sometimes to work on this project given the current state of AI discourse. I know that what I do, and what other media artists do in terms of exploring AI, is not the same as what a corporation is trying to do, but I still feel the reverberations of public opinion.

Either way, I’ve been really enjoying digging through my old photos. I decided in the end to use CLIP Interrogator (a weird little art piece in and of itself) and CLIP Prefix Captions to generate text from images. I use these mostly online through Colab or Hugging Face, because the computing resources there are better and I can skip annoying dependency issues locally.

The first noticeable thing is how little detail exists in CLIP outputs, which I find somewhat surprising because sites will generally prompt you to be as detailed as possible when writing captions and alt-text. My guess is it might have something to do w/ the way CLIP’s training data is labelled (possibly in rotten ways), or it might have to do w/ the way CLIP works in general. Either way it’s an image-identifying model, and I’ll have to look into its discourse more.

Back to CLIP Interrogator tho: this is a program made by a user named Pharma. It uses two models, CLIP and BLIP, and is generally used to generate prompts to feed into models like Stable Diffusion to produce similar images. So it’s not what someone might officially use to caption anything, but I find the output to be kind of unpredictable. Here are some examples of it generating prompts for a picture of train tracks.


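For completeness, running CLIP Interrogator locally looks roughly like the sketch below, assuming the pip-installable clip-interrogator package; the model string and image path are placeholders, and in practice I mostly just run it in Colab.

```python
# Rough sketch of CLIP Interrogator run locally (pip install clip-interrogator).
# Model choice and file name are placeholders; I usually just use the Colab.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("train_tracks.jpg").convert("RGB")

print(ci.interrogate(image))        # BLIP caption plus CLIP "flavor" phrases
print(ci.interrogate_fast(image))   # quicker, shorter variant
```
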
Tho I admit that sometimes CLIP Prefix tosses out a weird one now and then too, the weirdest being my boots on a pebbled beach. I rather like how short CLIP Prefix is.

I’ve been doing some base generation and also playing w/ a newer Phomeo printer. I’ll write an update on that in a different post though. I’m considering switching to the Phomeo because its image handling is much better, but that also means altering some of this project to forgo my printer code, which I feel is probably ok. It’s pretty old code, and maybe I can use some custom Python for assembly vs printer control.