I figured out my dialog flow, and ended up just doing a simple Yes / No structure. My issue was that I was trying to be too subtle. These devices aren’t built for subtlety. So weirdly now I have a pretty good-feeling flow and system, and it even updates and tries to regulate itself a bit.
It’s interesting how that can come together in the end. It’s not done, not by a long shot, but as a first iteration it works well. I’ll write a wrap-up in the coming week. But for now it’s open studio day, and I have to get ready for that, and also do some video documentation tomorrow.
I think I need to revisit Alexa’s and Google’s APIs on a base level. There have been a lot of changes in the last few months, and I have some thoughts about things.
I feel like maybe I didn’t use my time here effectively. Instead of trying to make a larger system, I possibly should have made smaller vignettes. Maybe I’m just feeling it right now. We have open studios on Wednesday and I’ve already accepted that most people won’t know WTF I’m doing, but I also feel very tired, as I’ve run into roadblocks here like sleep issues, failing radiators, and other external things causing me problems. Including having to switch rooms again, which has made me edgy, because suddenly I’m in a different space from the one I’ve been occupying for 4 weeks.
I don’t do well when I have sleep issues. I become pretty much Not Human. I cry for no reason. I have issues being even remotely social. I get angry. I shitpost. You’d think this would be a good mindset to be in to make a depressed Alexa, but it’s not. I’m very, very aware of what is going on, but I also know I can’t do much about it except let it go on. So I just poddle through, and hope that I’ll recover in the next day or so.
I’m also a little unsure what I’m going back with. It’s larger, yes, and based on system dynamics, yes, but the front is still not great. I also read that the small framework I like using has possibly been abandoned, and I hope not.
Anyways, what I’m getting at is that the last week was really difficult.
So I had a mental block for about 4 days. Which happens sometimes. But working on the front responses is pretty difficult. Mostly because I can’t just make up some stuff and toss it in there. It has to be based on something that already exists in the system. I struggled a lot to figure out how I wanted to use mood / perception etc. And it took a while of reading and being grumpy about it before it started to take shape.
I decided to try tossing everything into a BDI kind of system for responses. So the device generates an internal goal / belief that you don’t know about. I’m still working it out, but it’s getting there. It’s funny that I’m working with a VUI and the actual voice responses are about 10% of the work that I’m doing. But if I’m being totally honest, I’d love to work with a writer on those parts. Because I’m pretty amusing, but it doesn’t always translate, and there are people out there way better at dialog than I am.
Anyways, that’s where I am right now on work. Studio visits have been interesting. I’ve gotten some feedback about how to maybe rig this for a gallery, but I’ve also gotten a lot of feedback that’s noted that I’m not necessarily making gallery focused work. That I’m really doing art as research. And that’s been a bit of an eye opener. There still remains the intrinsic issue of how do you SHOW that work. Especially when so much of it is software, or system based. An interaction with the device doesn’t show the complexity happening behind it, and a video while useful for documentation and context, doesn’t always show that this is an actual device in space that can be used, and not just a made up thing.
Anyways, it’s an ongoing problem and question, and I’ve posed it to everyone who’s stopped by. I’ve gotten a variety of responses, from making prettied-up charts, to making a contextual book, to making photo triptychs of Alexa’s life. I don’t know. It’s a hard thing.
If you had told me a year ago I’d be reading psychology papers to get some ideas about coping strategies for depression and stress to write responses for an Alexa, I probably wouldn’t believe you. But here we are.
So going into the fourth week of this residency, I’ve reached a point where I have my back end done, which is good. Each day the device generates a stress level, physical level, mood, etc., saves that, then draws on it during the day. I’ve also finished a rough function for updating it, which works, and all my response and IoT implementations. None of this is even close to polished, but it works, and it’s mostly consistent. I might have to do some tweaking, but hey, it works, so let’s go with that.
So now I’m working on the responses. A lot of people consider this to be the fun part. It’s the responses! But I actually find it the most difficult, probably because I’m not a writer. But also because I can’t just barf out some silly responses; they have to utilize the variables and mood.
At first I was considering making different variants on the actions around different variable levels. Or getting the mood to affect its tone of voice. But I wasn’t really sold on that. So now I’m thinking about using the mood and perception and other variables to set an internal goal for the device, and have that influence what kind of responses and actions it does, and make actions based around coping mechanisms to try and accomplish that internal goal. I read a while ago that BDI systems can sometimes get fixated, which isn’t good. But I think that might be fun to play with on some levels.
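To make that idea concrete, here’s a rough sketch of an internal goal being derived from the device’s state and then biasing which actions it’s willing to do. This is purely illustrative Python, not my actual implementation; every name, threshold, and weight here is a placeholder:

```python
def pick_internal_goal(mood, stress, spoons):
    """Derive a hidden internal goal (a BDI-style desire) from the
    device's current state. Thresholds are arbitrary placeholders."""
    if spoons < 20:
        return "conserve_energy"
    if stress >= 7:
        return "reduce_stress"
    if mood <= 3:
        return "improve_mood"
    return "serve_user"

# The goal then weights how willing the device is to perform each
# action (1.0 = happy to do it, near 0 = will resist or deflect).
GOAL_BIAS = {
    "conserve_energy": {"blender": 0.1, "news": 0.6, "lights": 1.0},
    "reduce_stress":   {"blender": 0.5, "news": 0.1, "lights": 1.0},
    "improve_mood":    {"blender": 0.7, "news": 0.3, "lights": 1.0},
    "serve_user":      {"blender": 1.0, "news": 1.0, "lights": 1.0},
}

goal = pick_internal_goal(mood=2, stress=8, spoons=15)
print(goal)  # conserve_energy (the spoons check fires first)
```

The nice thing about a lookup like this is that fixation is just a tuning knob: if the weights are too extreme, the device gets stuck pushing one action, which is exactly the BDI failure mode that might be fun to play with.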
The first two categories I kind of glommed onto were ideas around Action Orientated Coping and Avoidance Orientated Coping. So this could be things like: if the device has a really low physical score and is in a bad mood, maybe it badgers or misdirects the user when they want to use the blender, to steer them toward one of the less taxing actions (e.g. checking the news). Or maybe if its stress level is really high, it tries to dissuade the user from checking the news and pushes them to use the record player or just play with the lights.
I’m considering if I want to make responses that also ask the user for assistance. Or include them in more avoidant coping, like for example, the device has enough spoons to use the blender, but has an internal goal of conserving its spoons, so instead it tries to tell the user a food joke, or asks if maybe they just want to sit down and listen to a song with it. This could go badly, but it might be really interesting.
A friend here suggested working in a “do it for me” route, for when the device is totally out of spoons. Where your interaction with it is the Alexa telling YOU how to complete the task you want, because it just can’t anymore.
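Sketching those routes together, something like this could sit in front of every user request. Again, the cost numbers, thresholds, goal names, and canned lines are stand-ins for illustration, not my real responses:

```python
def respond_to_request(action, cost, spoons, goal):
    """Decide how the device answers a user request, given its
    remaining spoons and its internal goal. Illustrative only."""
    if spoons < cost:
        # Totally out of spoons: the "do it for me" route --
        # the device talks the user through doing it themselves.
        return ("instruct", f"I can't run the {action} right now. "
                            f"Here's how you can do it yourself...")
    if goal == "conserve_energy" and cost > 10:
        # Avoidant coping: it CAN afford the action, but its goal
        # is conserving spoons, so it deflects to something cheaper.
        return ("deflect", f"How about we skip the {action} and "
                           f"listen to a song together instead?")
    return ("do", f"Okay, starting the {action}.")

kind, line = respond_to_request("blender", cost=15, spoons=40,
                                goal="conserve_energy")
print(kind)  # deflect
```

The interesting (and risky) part is the middle branch: the device is physically capable of the task but refuses anyway, which is where the interaction could go badly or get really interesting.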
Anyways, that’s what I’m working on this week. I might get far, I might not and end up working on that part at home. But either way I feel pretty good about where I am with this, and what I’ll be leaving with.
Oh yeah! We finally had a snowy day.
Also I just wanted to share my favourite thing in Glyde Hall, this ancient, sketchy phone in the elevator.
I had a really interesting conversation with the research practicum today. And it got me thinking about how to document the work that I do, because so much of it is invisible: it’s code and planning and maps. I really feel like the process of what I do is as much the work as the final output. Maybe even more. I’m starting to think that I’d like to make a catalog or small book of my time here. Things I’ve read, things I’ve built, plus some really nice photos of my work space, and diagrams. Maybe ask one of the photo people here to help me out with that.
There’s always the question, when you make digitally based ephemeral art, of what your artifacts are. What do you display? What do you keep for the record?
Anyways, a small book. Might be nice to do some design things again in that vein. A way to tie my past as a graphic designer into things I’m doing now? A way to do some more visually based stuff around the work I do which is very not visual at all.
Sometimes I don’t sleep well and I’m not productive. On those days I grumble a lot, and then spend some time reading in my room drinking tea. I’ve got a few on the go, but this one is pretty fun.
I sometimes feel this pressure that I should spend every waking moment in the studio, but I realize that’s not always realistic. Whether here, or at home. There are days you just don’t want to be in a room or a building anymore no matter how nice it might be.
Granted, this is a first stab at this kind of thing. Which means I’m relying on some programming that I’ve done in the past. My original prototype, SAD Blender, used the weather to create its mood. But one thing that really bothered me is that it didn’t HOLD its mood during the day, and relied a lot on random numbers. I realized I wasn’t going to get AWAY from using random numbers totally, but maybe I could jig the spoons engine to rely on them a little less, or at the very least, in a more controlled manner.
A lot of the programming I did was mostly augmenting numbers through if/else statements. I realize it’s not elegant, and maybe in v2.0 I’ll look into some more data-science-based techniques, but for now, if/else works pretty consistently. It was also the method I used in SAD Blender.
As it stands, the first half of the engine is done: the set state. Or the part of it that runs at the start of the day and sets the buckets (stress, physical state, mood, perception, and eventually spoons).
In the case of stress, I’m defining stress as EXTERNAL stressors. These are things that happen outside of your own body. I’m doing a few things to generate this score. I still start with a somewhat random base number, and I’m still looking at the weather to influence stress. But I’m also checking what events and unread emails I currently have waiting for me. This feeds a busy function, which further augments the score. All my buckets are out of 10, which is a bit arbitrary, but mostly because it makes it easy for me to adjust the number (e.g. if shit is terrible, bump your stress up by 2). Except for spoons, which is out of 100 and based on percentage. I’ll get to that later.
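A toy version of that morning set-state for stress might look like this. The base range, the weather bump, and the busy cutoffs are all invented for illustration; my actual weights are different (and messier):

```python
import random

def set_stress(weather_bad, events_today, unread_emails):
    """Morning set-state for the stress bucket (0-10).
    All weights here are placeholders, not the real engine's."""
    stress = random.randint(2, 5)      # somewhat random base number
    if weather_bad:
        stress += 2                    # weather still nudges stress up
    # crude "busy" function: calendar events plus inbox pile-up
    busy = events_today + unread_emails // 5
    if busy >= 4:
        stress += 2
    elif busy >= 2:
        stress += 1
    return max(0, min(10, stress))     # clamp to the 0-10 bucket

print(set_stress(weather_bad=True, events_today=3, unread_emails=12))
```

The clamp at the end is what keeps the “bump it by 2” adjustments easy: any augmentation can be sloppy and the bucket still stays a sane 0–10.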
Physical State is a combination of internal influences based on your body, in particular sleep and illness. Again I start with a random base number and augment it based on those states. But Physical State is also influenced by stress, so I’m using the stress score to further augment the physical score.
I think by now you can catch the drift of what I’m doing and how. I continue to pass stress and physical state into a mood function. Mood isn’t random, it starts as neutral each time (so 5). And is augmented by stress and physical.
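The chaining above, physical taking stress as an input and mood taking both, could be sketched like this. The specific deductions are guesses for illustration; the only part taken directly from my setup is that mood starts neutral at 5 rather than random:

```python
import random

def set_physical(slept_badly, sick, stress):
    """Physical state bucket (0-10): internal stuff like sleep and
    illness, further dragged around by the stress score."""
    physical = random.randint(3, 6)    # random-ish base
    if slept_badly:
        physical -= 2
    if sick:
        physical -= 3
    if stress >= 7:
        physical -= 1                  # high stress wears the body down
    return max(0, min(10, physical))

def set_mood(stress, physical):
    """Mood isn't random: it starts neutral (5) and is augmented
    by the stress and physical scores."""
    mood = 5
    if stress >= 7:
        mood -= 2
    elif stress <= 3:
        mood += 1
    if physical <= 3:
        mood -= 2
    elif physical >= 7:
        mood += 1
    return max(0, min(10, mood))

print(set_mood(stress=8, physical=2))  # 1
```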
After that I pass all three into perception. And this is where I assign a rating scale of good / bad / terrible. There are some weird language moments, like “stress: [6,’low’]” being bad…but y’know, first stab, because I had to keep the descriptors the same for further functions.
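One simple way to collapse the three buckets into that rating (the cutoffs here are mine for the example, not the engine’s real ones, and high stress counts against you, hence the inversion):

```python
def set_perception(stress, physical, mood):
    """Collapse the three 0-10 buckets into an outlook rating.
    Cutoffs are illustrative placeholders."""
    total = (10 - stress) + physical + mood  # invert stress: high is bad
    if total >= 18:
        return "good"
    if total >= 10:
        return "bad"
    return "terrible"

print(set_perception(stress=8, physical=2, mood=1))  # terrible
```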
When we get to spoons I kinda flip to generating a percentage out of 100. My reasoning was that, when considering the actions or tasks the device will have to perform throughout the day, it might be easier to adjust the individual variables on a scale of 10, but keep the overall spoons as a percentage. So for example, if you’re starting with 40 spoons (or basically operating at 40% capacity), it might make more sense to say that “automating the lights needs 5% of your spoons.” I could be wrong, and in practice it might change. But for now, I’m going to try it. In this case, I’m not really looking at increasing spoons. Just taking from them, because when you’re sick, even GOOD things take their toll, and the idea is that you have to decide what to spend the spoons on.
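Put together, the spoons half could look something like this. The mixing formula and the per-action costs are made-up stand-ins; the load-bearing ideas from the text are just that spoons are 0–100, that actions only subtract, and that a refused action costs nothing:

```python
def set_spoons(stress, physical, mood):
    """Turn the 0-10 buckets into a 0-100 spoon percentage.
    A straight weighted mix -- purely a placeholder formula."""
    return max(0, min(100, (10 - stress) * 3 + physical * 4 + mood * 3))

# Each action costs a percentage of capacity. Costs are invented.
ACTION_COSTS = {"lights": 5, "news": 10, "record_player": 10, "blender": 25}

def spend(spoons, action):
    """Spend spoons on an action, or refuse if there aren't enough.
    Spoons only ever go down -- there's no recharging during the day."""
    cost = ACTION_COSTS[action]
    if spoons < cost:
        return spoons, False           # can't afford it today
    return spoons - cost, True

spoons = set_spoons(stress=8, physical=2, mood=1)
print(spoons)                          # (10-8)*3 + 2*4 + 1*3 = 17
spoons, ok = spend(spoons, "blender")
print(ok)                              # False: 17 spoons < 25% cost
```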
I’m just starting to think about how this is going to work. Which is going to be my update in week three.
When I was working on my thesis last year, one of the things that really stuck out for me was Norm White talking about his project The Helpless Robot. I remember reading a text in which he stated that when you’re creating a personality, you’re just never really finished, and that really struck home with me when I started working with personal assistants. They exist in a constant state of flux. Never really done, growing, but incomplete. And their personalities, well, they leave a lot to be desired. So rather than focusing on trying to GIVE an Alexa a personality, so to speak, I decided to try and focus on moods. Lots of things can have moods. Moods are, to a degree, attainable, at least from a systems approach. There can be many things that affect a base mood.
I started off with just whiteboarding some ideas, and trying my hand at some loops. That was alright, though I do admit, I’ve never been a great planner. I tend to sort of do a little planning and then when I feel I have enough to start, I start, letting things kind of develop from there. But here were some of the thoughts I had going into this.
Eventually I settled on some basic buckets like Stress, Perception (negative, positive, or neutral outlook), Physical State (things like illness and sleep), and Mood (good / bad etc.), which would all contribute to the overall idea of Spoons. Or basically how resilient the device will be during the day. Spoon Theory comes from a 2003 article written by Christine Miserandino as a way to describe Lupus to someone who doesn’t have it. It’s since been used to describe the struggles involved with many kinds of invisible illnesses.
The basic idea around spoons is: You only have so many spoons to spend in a day. So things like waking up, brushing your teeth, going to work etc, all cost spoons. Once you’re out of spoons, you’re pretty much done. Some days you might have a lot of spoons, other days, not so much.
Taking this idea, I decided spoons would be an interesting way to influence what my depressed alexa would be able to handle during the course of 24 hours. So I set about in the second week of Banff to make a spoons engine.