UKAI Residency

So I got into a small residency with UKAI, which is a Toronto-based arts group looking at AI, and structures to support art in a new kind of future. It’s been good to be in this world again, albeit I don’t have the same capacity as I used to. For this one I knew I wanted to work with cameras on some level. I had this idea a while ago of security cameras that only look at boring stuff, as a continuation of some of the themes around autonomy and useless systems that I was poking at in 2019, but that I also looked at in 2022.

In 2022 I went to Stratford and did a mini weekend residency where I made a bunch of autonomous zooms called In Camera Meeting, where different tablets watched a movie together. In another piece I mounted many tablets in a room, but if someone approached one they would see themselves from a different camera’s point of view. So it was really about the device’s perspective.

This post is a bit late, I wanted to write it when I started, but I’m not always great at timing things. Anyways, I spent about 2 weeks experimenting w/ cheap cameras and setting up a network, and it’s since evolved from “looking at boring corners” to “looking at one’s self”, but the “one” here is still the device. So I’m kind of working in a space where the camera devices I have are not interested in people, even tho sometimes people are around. Anyways I’m going to write at least two more posts, one about where things are at right now, and the other about loose concepts and theory.

Site Chew Up

I sort of lost a few posts and media in a site chew up a little while ago. I feel my drive to archive a bunch of things this year didn’t develop well, but I’m going to try and get a handle on that this winter. It’s been hard, with the death of Twitter and shuffling around where I exist, to figure out how to show 20 years of stuff. I don’t want it all on my site, because it’s not all relevant to the now, but it is all connected. I’ll likely end up sorting it onto Flickr, the only platform I’ve consistently kept (and paid for) and that has existed for a long time.

I’m also trying to figure out where I am w/ “art” in general. I feel like the last few years have been very difficult in terms of making things, and just…feeling validated, which is important. It’s important to know that what you put out there exists to people on some level.

Anyways I’ll be doing some back-posting here, because as mentioned I lost a few things. And I need to round some stuff out on the Local Loops documentation.

Maybe I’ll make it a habit to put things here? I’ll try.

Local Loops Small Wrap Up Notes

So I actually finished this by April 1st, 2023, I just had some stuff get wiped out, so I’m going to try and paraphrase a conclusion as best I can.

This project didn’t totally go where I thought it was going to go. I really thought I would continue my look and feel from 2019, making stuttering loops that break down. But as I started playing around with GPT-Neo more, I realized that maybe the days of stuttering GPT-2 were over. Then ChatGPT showed up and all hell broke loose. I won’t lie, it was difficult to work on something like this while The Discourse was happening and it seemed like suddenly everyone was doing something with “AI”. But I dunno, after poking at ChatGPT a bunch I sort of felt like it was _too good_ to be something that I wanted to use. I like when things are weird, I like when they don’t always make sense. It’s true that I don’t know all the sources in GPT-Neo’s base training, but my feeling is that I’m taking an almost dadaist approach to making this anyways. So I started cutting/editing/mashing my way through things, until I found structures I liked, or words that I _purposely_ wanted to put together. Sort of the same way people do cut-and-paste poetry.

There’s lots of different ways to guide a machine-adjacent process. Sometimes it’s through programming, where you give it parameters and instructions, and sometimes it’s through editing, where you use your hand to work the outcomes. Either approach works, as in the end you’re doing something WITH a machine, where you still have input and say in it. It can also be tight or loose depending on what you’re trying to accomplish. I work kind of loose, and I don’t like polishing stuff, and that factors into how I consider doing things.

What I found really interesting is how much I went back to just doing straight-up print type work. I experimented with different Bluetooth printer firmware and hacks, but none of it really produced the outcome I wanted, and at some point I thought “well I can spend the next week trying to figure out this firmware, or I can crack open Affinity Publisher and learn how to lay stuff out there”. And I really enjoyed it. It was like a full reminder that “oh right, I used to do print, I used to do things on paper and then send them out”. Maybe that’s a new/old direction for me? Getting re-acquainted with doing layout by hand in a publishing program, vs just trying to code a layout.

In the end I made 2 volumes of zines, and 8 small collages. Because of The Discourse in March I didn’t share them too widely, or share what I was doing too much. But I’ll probably have to update my documentation on Flickr, just to round it out.

Resources

I haven’t made these public, but I used Notion to keep track of all the tutorials, articles, and GitHub repos I tried and visited, and now I’ve amassed a good amount of resources that I can revisit for doing similar or related work. These are part of a larger ongoing resources list I have in Notion, and I just cross-referenced it to a specific project.

The other thing is that I finally got to sort through all my walk photos. That really needed doing.

Putting Things Together

After going through the layout process I started putting things together into their little envelopes. This part wasn’t that different from my 2019 process. I mostly just let the printer print away, then folded everything over and trimmed down the collages to fit properly. For the front I went with something maybe a bit too vector-y, but I liked that I could make them very much a 1 and 2 design and have them be related. One is just the inside of the other.

I feel like the little card collages came out well, and working little printer printouts into them was a good move. I feel like I could have done more to integrate the collage feel into the zine itself in the end, but next round.

Layouts and Printing

So after deciding I wanted to use my small Phomeo printer, I tried using a few different libraries and open source hacks to get my older Python layout code to work with it. This proved to be sort of fruitless, as I kept hitting roadblocks and it wasn’t really doing what I wanted it to do anyways.

After that, I tried doing some PDF generation with Python. I found this to be kind of cool, because I could do some basic positioning and image work. Python has some pretty well documented and robust ways to make PDFs. But again, it wasn’t really going where I wanted, and I found myself asking the same question I had w/ the firmware, which was “do I want to spend countless hours tracking down how to fix X, or…not?”
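
For anyone curious, here’s a rough sketch of the kind of PDF generation I mean, using the reportlab library. The page size, file names, and layout numbers are just placeholders for illustration, not my actual zine code.

```python
# Minimal sketch: place a photo and a few lines of text on a small page.
# Dimensions roughly match a narrow thermal-printer strip (an assumption).
from reportlab.lib.units import mm
from reportlab.pdfgen import canvas

PAGE_W, PAGE_H = 58 * mm, 120 * mm  # hypothetical strip size

c = canvas.Canvas("zine_page.pdf", pagesize=(PAGE_W, PAGE_H))

# Photo near the top of the page (path is a placeholder).
c.drawImage("photo.jpg", 4 * mm, 70 * mm, width=50 * mm, height=40 * mm,
            preserveAspectRatio=True)

# A few lines of generated text underneath.
text = c.beginText(4 * mm, 60 * mm)
text.setFont("Helvetica", 8)
for line in ["a line of generated text", "another line goes here"]:
    text.textLine(line)
c.drawText(text)

c.showPage()
c.save()
```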

After some thought, I ended up teaching myself how to use Affinity Publisher. It was weird at first, I felt like I had failed, or was “cheating”, but then I reminded myself that yes, I used to do print work…and print was where my entire interest in design started.

One thing that was interesting to me is that, like the text generation and assemblage, I sort of went back to a classical kind of layout. I think I just really liked my photos as is, and the idea of printing a bunch of stuff over them didn’t really appeal to me. I had started w/ doing collages in Affinity Photo, but it didn’t translate too well to the small printer, and in the end I just wanted to keep it simple. Plus I decided that the collages inserted into the zines were a juxtaposition I could live with.

The other challenge was getting the zine into a format for sending to the printer. One of the downsides of not really hacking it was having to deal w/ the printer software…printer software kind of sucks across the board, regardless of printer or platform. But this was tied to mobile and it was pretty annoying. I tried some different workflows, like using the iPad version on my computer, and sending things to my phone as a PDF.

In the end the process to print things was Affinity Publisher > export as a JPG > AirDrop to my iPhone > use the app on my phone to print to the printer. This I found was the best way to keep the layout 1 to 1 and not deal with text replacements (hilariously something I always had to deal with when doing print, so 20 years later and that’s still a thing).

All in all this ended up being a much less code-related process than I thought it would be.

Collages and Small Printers

Working on some collage this week using tickertape printer output and found items. I’m starting to think that I want to use my little BT printer instead of my older Adafruit printer, which would change the format of my zine, but overall could be a positive outcome. I feel that I’ll spend more time doing assemblage and less time in software, but frankly I’m fine with that. I’ll have to play around with the Phomeo printer firmware though.

So after hooking up my old Adafruit printer that I used in 2019 and porting over my code, I have come to the realization that there is a lot of buffer overflow. I’m starting to lean more towards using my Phomeo. I wouldn’t be able to directly translate my existing code to use with it, but I’m thinking maybe I can repurpose my old zine code to be more of an assembler that outputs a PDF I can print from the Phomeo. I’ll tinker with it. I have found some open source libraries for the Phomeo, but because it’s Bluetooth it’s not as reliable. One option could be outputting everything as a website, and then printing from the web. Scaling could be a problem. I’m going to do some experiments.
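
To give a sense of what I mean by buffer overflow: the old Adafruit printer hangs off a serial connection, and if you push lines at it faster than it can physically print, characters get dropped or garbled. Here’s a rough sketch of the pacing workaround using pyserial; the port, baud rate, and delay are guesses, not my actual settings.

```python
# Sketch: feed a thermal printer over serial slowly enough that its small
# internal buffer doesn't overflow. Port, baud rate, and delay are assumed.
import time
import serial

printer = serial.Serial("/dev/ttyUSB0", baudrate=19200, timeout=3)

lines = ["generated text line one", "generated text line two"]

for line in lines:
    printer.write((line + "\n").encode("ascii", errors="replace"))
    printer.flush()
    time.sleep(0.25)  # give the print head time to catch up

printer.close()
```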

Little Phomeo Bluetooth printer

Process and Output

So I’ve had some good raw generations. The process I’m doing here is a variation on the process I had in 2019. I start with a photo I’ve taken, and I feed that into a caption generator and a prompt generator (I might add a third caption generator to the list like Flickr8, but CLIP Prefix / Interrogator has been decent). I play with the settings available to me, generally temperature and top_k, until I get something back that I think is interesting.

some very early raw output that is too repetitive.

I run the same prompt text several times to get back a bulk of output. I then look through that raw output and pick out a few specific lines, which I re-feed back into GPT-Neo. Again I adjust the settings until I feel something interesting comes back. I might do this 2 or 3 times. After that I comb through all the output and start editing things into small vignettes. I found that using the single line captions from CLIP Prefix, or editing a sentence together from Interrogator, worked better as GPT-Neo prompts than plunking in whole word salads.
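
If you want to try the same loop, a minimal sketch of that generation step with the Hugging Face transformers library looks something like the following. The model size and sampling settings are examples; I adjust them by feel on every run.

```python
# Sketch: run one caption-derived prompt through GPT-Neo several times to
# build up a pile of raw output to edit from. Settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = "a pair of boots on a pebbled beach"  # e.g. a CLIP Prefix caption

outputs = generator(
    prompt,
    do_sample=True,
    temperature=0.9,         # higher = weirder
    top_k=40,                # sample only from the 40 most likely tokens
    max_new_tokens=80,
    num_return_sequences=5,  # several candidates per prompt
)

for out in outputs:
    print(out["generated_text"], "\n---")
```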

One thing I’ve noticed is that I didn’t need as much data as I thought I would for this process. In my original write-up I was going to use my journals, and planning docs, and caption output, and I still might…because I spent the time sorting them, but I have a lot of good material to work with just out of prompting GPT-Neo with captions, and I’m starting to feel that including too much extra text would just dilute my process.

I also found I didn’t need as many photos as I thought I would, mostly because I’m doing a manual process vs fine-tuning or something more automated. So part of this project has been just sorting. Sorting my photos, manually deciding what to keep, what to use, what to discard. Sorting through planning docs and picking out specific phrases. Sorting through raw output text.

It’s like an excavation of my personal archive. It’s enjoyable, and I feel it’s a deliberately slow way of working during a time when so many things want to go FAST. Maybe something to consider going forward is how to develop more methodologies for working slowly with AI.

Touch Point

I admit it’s been difficult sometimes to work on this project with the current state of AI Discourse. I know that what I do, and what other media artists do in terms of exploration of AI, is not the same as what a corporation is trying to do, but I still feel the reverberation of public opinion.

Either way, I’ve been really enjoying digging through my old photos. I decided in the end to use CLIP Interrogator (a weird little art piece in and of itself) and CLIP Prefix Captions to generate text from images. I use these mostly online through Colab or Huggingface, because the computing resources there are better and I can skip annoying dependency issues locally.

The first noticeable thing is how little detail exists in CLIP outputs, which I find somewhat surprising because generally sites will prompt you to be as detailed as possible when writing captions and alt-text. My guess is it might have something to do w/ the way CLIP’s training data is labelled (possibly in rotten ways), or it might have to do w/ the way CLIP works in general. Either way it’s an image-identifying model, and I’ll have to look into its discourse more.

Back to CLIP Interrogator tho, this is a program made by a user named Pharma. It uses two models, CLIP and BLIP, and is generally used to generate prompts to feed into models like Stable Diffusion to generate similar images. So it’s not what someone might officially use to caption anything, but I find the output to be kind of unpredictable. Here are some examples of it generating prompts for a picture of train tracks.


Tho I admit that even just CLIP Prefix tosses out a weird one now and then. The weirdest being my boots on a pebbled beach. I rather like how short CLIP Prefix’s captions are.
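
On the CLIP Interrogator side, there’s also a pip package if you’d rather run it locally instead of through a web UI; something roughly like the sketch below (package and model names are from memory, so double-check them against the repo).

```python
# Sketch: generate a Stable-Diffusion-style prompt from a photo with the
# clip-interrogator package (pip install clip-interrogator).
# Model name and image path are assumptions.
from PIL import Image
from clip_interrogator import Config, Interrogator

image = Image.open("train_tracks.jpg").convert("RGB")

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
print(ci.interrogate(image))
```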

I’ve been doing some base generation and also playing w/ a newer Phomeo printer. I’ll write an update on that in a different post though. I’m considering switching to the Phomeo because its image handling is much better, but that also means altering some of this project to forgo my printer code, which I feel is probably ok. It’s pretty old code, and maybe I can use some custom Python for assembly vs printer control.

Local vs Not

Y’know, I think for this project I might not roll my own things. I’ve been finding some good web-UI based options for generating captions and prompts and, at least for some things, I won’t have to go through the annoyance of running something locally. Here’s a nice example from CLIP Interrogator on Huggingface that produces some interesting output I could use.

Screenshot of CLIP Interrogator making a prompt from a log covered in turkey tails

I might even go back to my old computer and use my interactive GPT-2 option installed there. It’s older, and it breaks amusingly, but that’s what makes it fun. There could be some old code I translate to a new environment and newer models, but maybe…maybe I don’t have to. I’ll think about it.