press escape?

i was born in the darkness, and there i was raised
—the world is only as small as your cage—
a sapling rooted under cover of brush
is doomed at birth to turn to dust
but bless those hands that peel away
what otherwise tangles and decays
the gentle slope of a sunlit beam
is enough to unfurl dormant dreams.

—

title: press escape?
format: 4k mp4
duration: 17sec
process: fully created and animated using blender
by tinyrainboot, 07.2024

—

available as a 1/1 on the new lens-based platform, mystic garden: https://www.mysticgarden.xyz/gallery/0x012a99-0x046c

minting now: the contours of a dream

so pleased to present to you the contours of a dream, which is live-minting on https://playground.ink/tinyrainboot/ for the next 2 days. that means that every piece is unique and generated when you go to mint it!

the contours of a dream is about the murky spaces that comprise the human psyche: what secrets lie in the unexplored corners of (sub)consciousness?

the collection follows the path of a faceless figure wandering through chaotic and confusing states of mind, ever in search of scarce moments of tranquility.

these dreamscapes, "imagined" with the help of AI, encourage the viewer to reflect upon their own disposition. awake or asleep, where does your mind take you? what is it trying to say, and are you listening?

my favorite piece minted so far

read the full write-up: https://paragraph.xyz/@tinyrainboot/the-contours-of-a-dream-coming-to-playgroundink-on-july-16

and watch the trailer:

check out all the pieces minted so far: https://playground.ink/tinyrainboot/

and let me know if you pick one up <3

trailer: the contours of a dream

date: july 16, 8pm CET
platform: playground.ink on solana
price: 0.3 sol
the collection is limited to a maximum of 333 pieces.

set a calendar reminder: https://calendarlink.com/event/Srzuk

read more about it: https://paragraph.xyz/@tinyrainboot/the-contours-of-a-dream-coming-to-playgroundink-on-july-16

the contours of a dream ā€“ coming to playground.ink on july 16

i awaken in a world that isn't mine. eerie and desolate, the weight of a loss i can't quite remember washes over me. i thought i knew what time was, but it has lost its meaning in a dawn that stretches its fingers toward eternity. the air is different here: sharper, thinner, lighter. can you imagine the sound of a planet where you are the only one breathing? i do not know if this is future or past, whether this place is forgotten or still unknown. my unspeakably fragile vessel of sinew and synapse is all that carries me forward. have i been here before? or am i simply tracing the contours of a dream?

about the collection

the contours of a dream is about the murky spaces that comprise the human psyche: what secrets lie in the unexplored corners of (sub)consciousness? the collection follows the path of a faceless figure wandering through chaotic and confusing states of mind, ever in search of scarce moments of tranquility.

these dreamscapes, "imagined" with the help of AI, encourage the viewer to reflect upon their own disposition. awake or asleep, where does your mind take you? what is it trying to say, and are you listening?

each generation is completely unique and created at the time of mint

the contours of a dream is a long-form, live-minted AI collection: each piece is uniquely generated at the time it is minted, tying the image seed to the blockchain transaction hash. this approach adds an element of randomness: although guided by parameters, or "contours," AI fills in the rest of the "dream."
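as an illustration of the idea, here is a tiny python sketch of seeding a generator from a transaction hash. the hash string and the folding scheme are hypothetical (playground.ink's actual derivation isn't documented here); it only shows why tying the seed to the transaction makes each mint unique yet reproducible:

```python
# Hypothetical sketch: deriving a deterministic generator seed from a mint
# transaction hash. The hash string and the folding scheme are made up for
# illustration; the platform's real derivation may differ.
tx_hash = "5KtPn1LGuxhFqnXG24AmfLHRhEwkFZ3kpYdPDNAWcXYz"  # placeholder signature

# Fold the hash bytes into a 32-bit seed the image model can consume.
seed = int.from_bytes(tx_hash.encode("utf-8"), "big") % (2**32)

# The same transaction hash always yields the same seed, so a minted piece
# is both unique to its transaction and reproducible from it.
print(seed)
```

since the seed is a pure function of the hash, anyone holding the transaction can re-derive it; the randomness comes entirely from which transaction the collector happens to produce.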

collection themes

growing up as a "third culture kid" and a "girl online," much of my work focuses on themes of connection, loneliness, longing, and what it truly means to be human — particularly in our hyper-technological era: never before have we been so connected, yet felt so lost.

the contours of a dream delves deeper into these topics, exploring dark and isolating mental states, contrasted with occasional glimpses of a potentially different, more serene reality. it asks the viewer to explore their own psyche, to take a look at disturbances just beneath the surface, and to think of ways to confront them.

if you've been following me for a while, you might also notice red threads to some of my earlier work, such as tell me, stranger — what haunts you?

whitelist (free mint) giveaways & logistics

stay tuned on farcaster / lens / twitter as i will be announcing and running a giveaway for whitelist spots (free mint) very shortly.

july 16, 8pm CET: whitelist mint
july 17, 8pm CET: public mint, 48 hours
platform: playground.ink on solana
price: 0.3 sol
the collection is limited to a maximum of 333 pieces.

add a calendar reminder: https://calendarlink.com/event/Srzuk

✩‧₊˚♡

what it feels like to live outside of your body

i can remember vividly one of the first times i had an episode of depersonalization/derealization. i was twelve or thirteen. sitting on my bed, i looked down at my legs when a sensation of what i can only describe as a gulf between my vision and my self washed over me. "those aren't mine," i thought. "are they?"

they didn't feel like mine.

these episodes were to worsen as i got older. there were moments where i simply could not reconcile my body as belonging to me. mirrors i could control, but other reflective surfaces were enemies. my arms and legs were foreign objects. sometimes it felt like i lived in a glass box that kept me separate from everyone.

seeing myself in a photo brought on a cringe i could only define as convulsive. i did not want that to be "me." i did not want this body. i did not want this life. i did not want to be perceived. my body was a cage and i didn't know how i became trapped in it.

there's a poem i found on tumblr in 2015 (peak tumblr years -_-) that describes the sensation well. here's an excerpt:

In the dream I have a body but it's not mine; I am an intruder wearing a suit of flesh with skin that has turned into granite.

I do not feel.
I feel too much.
There is nothing.
Everything is overwhelming.

In the dream we are machines. No emotion, just flatness: programmed thoughts, automatic speech and action without awareness or control. They call it a coping mechanism, and so I think of pulleys and gears pulling me up to sit somewhere in the top of my head and watch through a frosted lens

while someone else grips the controls, moving this body through the motions of living. Unfamiliarity in familiar places. Friends are strangers and strangers are blurs of colour, dabs of acrylic against bleached watercolour. Fog fills my mind, pressing against the glass that separates me from the world.

I bang hard on it. Bang. Bang. Bang. Let me out! You tell me that I locked myself in this steel-walled room. I say why would I do that? You open your mouth to explain but I can't hear you over the white noise. Buzz. You pinch my arm— a whisper: not dreaming. Bruises that fade to grey, cement skies, ash world.

In the dream that is not a dream, my hands turn into birds and fly away from me. You catch them and try to give them back, but I refuse. They aren't mine, I tell you even as you push them back onto my wrists. You ask me whose they are, then. I don't know. I don't know. I think I once knew someone who had these hands, but I don't know where they went.

— Martina Dansereau, here's the only link where i found the full version

why am i writing about this? to get it out. to iron it out of me. (it's almost gone.)

you might notice reflections of it in my long-form ai collection, the contours of a dream. it uses ai to explore dark and isolating parts of the psyche - scattered with glimpses of hope.

i'm curious what you'll think of it.

(july 16)

Has AI killed the artist?

You know that feeling when you expose something personal about yourself and it is met with a wall of gut-wrenching rejection?

It was last fall that my grandmother found out I was making art with AI. That this information reached her in the first place was not by my own volition - I know who to pick my fights with. We were having lunch when she brought it up, berating me for wasting my artistic talent on machines. I tried showing her some of my art, attempting to explain myself conceptually, but she wasn't willing to listen. I hated myself for letting her make me feel so deeply ashamed of something I was excited about.

That's not to say I don't understand her concerns: AI anxiety is a real phenomenon. We are facing many huge and unanswerable questions about how AI might change the fabric of our society as its effects ripple outward. But as a younger person, I feel more optimistic than anything else about the potential positive impacts of this technology. From medical and scientific research, to augmenting our personal abilities, to giving us more time in the day, and yes, even to art, the possibilities are tremendous.

Memories of Passersby I, 2018, Mario Klingemann

I've been making and posting my art online as Tinyrainboot for around two and a half years now. Back then, I started a project called "Internet Diary," where I used AI to express ideas or excerpts from my daily experiences. I didn't have the energy at the time for my usual creative outlets, such as painting and writing. This was something different, something exciting, and above all, something I could do surreptitiously at my work desk. Plugged into Stable Diffusion, my thoughts became prompts, generating weird and unexpected visual outcomes – and so began my fascination with AI art.

Some of the earlier pieces I made as part of Internet Diary.

Those of us who interact with this discipline on a daily basis often tend to forget just how new it all still is. It's been a mere 3 years since DALL-E came out, which exposed text-to-image technology to the masses for the first time – in turn raising many questions about its implications for the future of our society and the careers of those in creative industries.

I'd like to contextualize this present moment by briefly discussing how we ended up here. 

From the announcement of OpenAI's text-to-video generator, Sora.

During a recent talk I gave at GenAI Zurich, upon which this article is based, I went into the history of computer and AI art and gave a nod to the pioneers who walked this path long before: Vera Molnár and Harold Cohen, for instance. There are fantastic resources out there that can summarize all this better than I can, such as this comprehensive timeline by Le Random.

For the trajectory of the AI art space in particular, though, the year 2014 marks a pivotal moment.

Generative models have existed for decades already: a generative model learns a dataset's properties and then is able to create new data that statistically fits in with the originals. But in 2014, Ian Goodfellow developed a new approach to generative models: generative adversarial networks (GANs).

The revolutionary component is this: GANs pit two neural networks, or machine learning algorithms, against each other. One is the "generator," attempting to generate an output that matches a collection of examples, while the other is the "discriminator," attempting to distinguish between the real dataset and the outputs produced by the generator. Through many rounds of this competition, the discriminator pushes the generator to improve. In early iterations, Goodfellow's GANs were used to generate simple images of handwritten characters, faces, and even quasi-photographic scenes.

In 2014, a GAN generated human faces for the first time. The rightmost column shows real photos used to train the system.
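The adversarial loop is easier to see in code. Below is a deliberately tiny numpy sketch, not a real GAN architecture: both "networks" are single linear units and the "dataset" is a 1D Gaussian rather than images. It only demonstrates the generator/discriminator tug-of-war described above:

```python
import numpy as np

# Toy GAN in numpy. Both "networks" are single linear units, so this only
# illustrates the adversarial training loop, not an image-generating model.
rng = np.random.default_rng(0)

def real_batch(n):
    # The distribution the generator must learn to imitate: N(4, 1).
    return rng.normal(4.0, 1.0, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

G = {"a": 1.0, "b": 0.0}   # generator: z -> a*z + b
D = {"w": 0.1, "c": 0.0}   # discriminator: x -> sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=(32, 1))
    fake = G["a"] * z + G["b"]
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(D["w"] * x + D["c"])
        grad = p - label                       # gradient of BCE wrt the logit
        D["w"] -= lr * float(np.mean(grad * x))
        D["c"] -= lr * float(np.mean(grad))

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = G["a"] * z + G["b"]
    p = sigmoid(D["w"] * fake + D["c"])
    grad = (p - 1.0) * D["w"]                  # chain rule through D
    G["a"] -= lr * float(np.mean(grad * z))
    G["b"] -= lr * float(np.mean(grad))

samples = G["a"] * rng.normal(size=(1000, 1)) + G["b"]
print(f"generated mean ~ {samples.mean():.2f} (target mean: 4.0)")
```

After enough rounds the generator's outputs drift toward the real distribution; swapping the linear units for deep networks and the Gaussian for image data gives the GANs described above.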

In 2017, this technology took another leap forward, when a project called CycleGAN demonstrated how GANs could be used to modify images, such as converting an image into the style of a specific painter or adding zebra stripes to a horse. See where this is going?

In January 2021, OpenAI launched DALL-E (itself built on a transformer-based model rather than a GAN), which soon enough brought text-to-image to the public's attention. Suddenly, the most bizarre combinations of ideas could be rendered visually, brought to life by an algorithm.

Samples from OpenAI's DALL-E announcement in 2021.

Today, most commonly-used AI image generators rely on diffusion models, rather than GANs, for higher-quality outputs. A forward diffusion process gradually adds noise to an image, and a learned reverse process removes it again, refining the output step by step. Diffusion models are also less likely than GANs to suffer from "mode collapse," which is when the algorithm gets "stuck" creating outputs that are repeated or highly similar.

Nvidia
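Here is a minimal numpy sketch of the forward half of that process, using the common linear noise schedule (the schedule values are illustrative, not tuned, and the "image" is a toy array):

```python
import numpy as np

# Forward diffusion only: mixing an image with Gaussian noise according to a
# variance schedule. Values follow the common linear setup; illustrative only.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal retention at step t

def forward_diffuse(x0, t):
    """Sample x_t directly: x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones((8, 8))                 # a toy "image" of constant pixels
x_early = forward_diffuse(x0, 10)    # barely perturbed
x_late = forward_diffuse(x0, T - 1)  # essentially pure noise

# By the last step almost no signal survives.
print("signal kept at final step:", float(alpha_bar[-1]))
```

A real diffusion model then trains a network to predict and subtract that noise, running the schedule in reverse to turn pure noise back into an image.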

Particularly due to a lack of transparency around how image-generating models have been trained, AI art has faced harsh criticism and been steeped in numerous controversies.

The collective behind Portrait of Edmond de Belamy, the first AI artwork ever to be auctioned by Christie's in 2018, faced backlash when it emerged that they had used the code and dataset of another artist, Robbie Barrat, to generate the work. A slew of ethical, moral and legal questions are intertwined with the discipline.

Edmond de Belamy, 2018, published by Paris-based collective "Obvious."

But does AI really represent the end of art as we know it?

When the camera was invented, some declared it the end of art, arguing that since taking a photo required less effort and skill than painting, it was the device, not the human, that was responsible for the final image. – James Bridle, The Guardian

I know, I know, this parallel has been beaten to death already. But I believe that this comparison glosses over an important difference between AI and the camera. What I want to point out is that we face a fundamental lack of alignment when it comes to the general public's understanding of what AI art is and how image-generating algorithms work.

Researchers at the Institute for Human-Centered AI at Stanford University ran a controlled, randomized study and found that half of participants saw AI simply as a tool, while the other half viewed it as an autonomous agent with its own beliefs and intent.

One of the challenges we face here is the anthropomorphization of AI. We say that AI "hallucinates," or "dreams," applying human characteristics to the technological process by which algorithms calculate their outputs. Studies have revealed that anthropomorphization affects trust, thereby creating an obstacle for the accountability and governance of AI systems. In short – it's complicated, and a better understanding of how AI works will be needed to get us all on the same page.

Bridle goes on to say,

There is no true originality in image generation, only very skilled imitation and pastiche – that doesn't mean it isn't capable of taking over many common "artistic" tasks long considered the preserve of skilled workers. – The Guardian

It's undeniable that many jobs in creative industries will be affected by AI. Jobs will be lost, jobs will change, and new jobs will be created. If you're a product photographer or graphic designer, you have certainly already seen a significant impact on your industry brought on by AI.

But I have an argument to make here, which is that this bland summarization of what AI art is not only glosses over the human component, but also fails to recognize that art is often not just about the final image.

Freefall, from Life in West America by Roope Rainisto, 2023

When we instead view AI art as a form of conceptual art, it becomes much more interesting in its marriage of man and machine. The idea behind the work and the process of creating it are more important than the outcome – and this is where I believe our generation's artists will find new ways of pushing the boundaries of creation.

If you consider the whole process, then what you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists – one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art. – Ahmed Elgammal, director of the Art and Artificial Intelligence Lab at Rutgers University

There's a lot of fear around AI and how it will change our status quo. The rise in AI-generated content on social media has even been called the "enshittification" of the internet.

But what a lot of people tend to forget in their fear of change – in their fear of the new – is that we, humans, are the key component. We define and contextualize what is interesting to us. Justin Hanagan from Stay Grounded draws a parallel I find very interesting. He compares today's AI debate with the first time a human chess master was beaten by a computer in 1997:

What is interesting about a chess-playing computer is not that it's good at chess, it's that it exists at all. The interesting thing about a chess-playing computer is that some former tree-dwelling primates arranged tiny bits of metal and silicon in such a way as to coerce the universe into playing a game better than any other tree-dwelling primate could dream to. – Future Grandmasters of the Attention Game

When is the last time anyone cared about watching a computer beat a grandmaster at chess? Or watched two supercomputers fight each other at chess instead? Two-plus decades?

Humans care if something is interesting, and novelty can only be interesting for so long once there are no humans involved.

Let me reiterate: the human component of AI art is what makes it valuable. So could it be that rather than witnessing the death of artistry as we know it, we are witnessing its modern renaissance? Instead of focusing on aesthetics, we are being given a chance to redefine the meaning of art and where we can take it. Shouldn't this push creatives to try even harder? To find the boundaries, and new ways of breaking them again and again?

Sample output from my upcoming long-form AI collection, The Contours of a Dream.

I believe that we are only just beginning to uncover what we can create, empowered by new tools and our evolving relationships with them. Exploring where this human-machine dialogue takes us is the most interesting part about AI art.

Naturally, we can't say yet how AI will impact the longer-term creativity of our society, or how it might affect our brains, thinking patterns or learning styles.

And all I can say now is that AI certainly hasn't killed the artist yet. I have no crystal ball, though – maybe it still will.

♡ tinyrainboot


pocket (ai)nimal

This is my submission for Claire Silver's 7th AI contest. The final piece is an AR experience, found at the QR code/link below.

https://flowcode.com/p/JVtI9uA9G

Tools used (AI marked with ✩):
Stable Diffusion XL (via Replicate) ✩
Pixlr AI photo editor ✩
Spline image-to-3D ✩
Blender
Anything World ✩
Geenee AR

I have really been wanting to play with AI and 3D lately, but hadn't yet found the time, so I felt this was the perfect opportunity. After all, the instructions were to "find a rabbit hole that interests you and go down it."

My idea was to create a little 3D creature using AI and then bring it to life with AR. I haven't done something like this before, so I started off by researching tons of tools and then narrowed it down to what I thought would work best. I had a few failed attempts along the way, but I'm happy with this result!

Keep reading for the full write-up (or consider it a tutorial, if you want to make your own ☺).

Creating the base image

I started by creating a creature with Stable Diffusion that I could use as a base for the model. You could skip this step and go straight to a text-to-3D generator, but since those can be expensive to use, it may be better to play around with the visuals using a cheaper option like SDXL. I really like using Replicate because it's altogether rather inexpensive. Midjourney is also great for this.

Hereā€™s my prompt:

Centered front view of a stylized cute tiny pink bird, white background, RPG game asset, unreal engine, ray tracing, perfect symmetry, flat lighting, full-body shot

I realized after trying this a couple of times how important symmetry is for the rigging step, so I optimized for that. I brought this image into Pixlr because they have a nice AI cutout tool. So with that, I was able to get my creature onto a transparent background with very little effort.

For work that doesn't require high precision levels, the AI cutout tool on Pixlr is quick and free.

I also wanted it to be perfectly symmetrical, so I duplicated the side with brighter lighting, flipped it, and merged the two pieces. A little bit of gentle blurring to fix up the line at the middle and it's looking good.

Ready to become 3D!

For the image-to-3D part, I tried out sloyd.ai, 3D AI Studio, and Spline. Sloyd didn't work for me because it can't do animals yet, but I mention it anyway because it has some other interesting applications. 3D AI Studio was not bad, but my outputs tended to have extra limbs. They have a good remeshing tool, too. Spline lets you do a couple of free generations, and the initial outputs were even better than I expected. I paid for a credit top-up so that I could play around a bit more.

I pulled the transparent .png that I'd prepared into Spline. It always offers you four outputs, and the first one is usually the closest to your input image. I assume that adding a prompt here affects how the other three turn out.

A cute symmetrical game character bird

I went through each of the outputs and ended up picking the fourth one because it is soooo cute and grumpy.

The preview mesh actually looked pretty good, but he (it's a he now) doesn't have a butt and his feet are wonky.

Cleaning up the model

So then I downloaded the .glb and imported it into a new Blender file. Even if you have never used Blender (it's free!), I hope to offer you a very simple walkthrough to fix up your model - I think anyone can do this. You can also completely skip this part, but you may need to find another way to extract your texture from the model as a .png for the rigging step.

Steps in Blender:

  • Create a new general file.

  • Delete all the existing objects by pressing "a" and then "x."

  • Go to File > Import > glTF 2.0

  • Find and import your file (you may need to add the .glb extension to the file in file explorer so that Blender recognizes it).

  • Click on the object once it's been added and press "tab."

  • This brings you into editing mode. Press "a" to select all and make sure vertices mode is selected (see pink arrow).

  • Now press "m" - a box will pop up. Click "by distance." This cleans up the mesh a bit by removing extra vertices.

  • Press "tab" to go back to object mode. If you want to see the texture better, you can use these settings:

    • My suggestion for the easiest possible mesh cleanup is as follows: all you need to do is add and apply two modifiers.

    • Go to this tab on the right side (with your lil guy selected still):

    • Now click "add modifier" and search to add "mirror" and then "smooth." (You can type it in.) By mirroring over the X axis, any weird deformities should be fixed as the mesh gets filled by what is happening on the other side. For me, this fixed the issue with the feet, for example. The "smooth" modifier also tends to fix any weird, sticking-out parts. You can set the factor to 1 and then the repeat to 1, 2, or 3 - whatever looks better to you.

  • Then once you are happy, apply both modifiers using the down arrow next to them.

  • If you need to make body adjustments, you can go into sculpt mode.

  • This is what I did to give him a booty: enter the sculpt tab, then make sure you have x-axis symmetry selected (we still want to keep our guy symmetrical as this will be required for the following rigging process). Press "g" and this pulls up the grab tool. You can adjust the strength and radius at the top. Then just tug around a bit at your mesh until you get the desired result.

  • Okay, should be good. Now we need to export two files: the texture and the model. The texture is an image file. Click on the "texture paint" tab. On the left, you probably see a messy-looking image. This is what's giving our guy his color. We need to save it separately from the model for the rigging step. Click Image > Save As > and then save it as a .png.

  • Great, now we just have to export the model. Normally a .glb file should work, but I was having issues with the texture showing up in the next step so I exported mine as an .fbx instead. Make sure your character is selected, then File > Export > FBX > and make sure to click ā€œlimit to selected objectsā€ and then save.

Rigging and animating the model

Now that the model is looking better, it's time to animate it! I couldn't use the Reallusion rigging tool because it doesn't have Mac support. Sad face. But I found a wonderful alternative called Anything World that has an AI rigger. That means they use AI to add a "skeleton" to the model and then animate it.

So, it says "Animate Anything," but there are some exclusions for now. We're still early! On their upload page, under "model processing constraints," you can see what is included for now. You also get a few credits to try it (I had to buy a credit pack because I did this a few more times, trying different approaches and models. But I don't mind supporting projects like this; I'm excited about what they're working on).

Go ahead and upload your model - you'll need to include both the .fbx and the .png here.

Then you'll add a name and choose a type and a subcategory if available. I ran this twice, once with "hopping bird" and once with "walking bird."


It takes a few minutes to rig the model with AI. While I waited, I researched baby names and decided to name my guy Clyde.

Why all the hate on the name Clyde?

On the next screen, you'll confirm the skeleton, and then wait again for it to add the animations. This is the most exciting moment! The animations are ready!

Clyde lives and breathes! I was dancing around my living room at this point. It's so cool. I almost couldn't believe it worked. You can download all the animation files or just the .glb (that's the one you need).

Placing the model into AR

Now it's time to give Clyde a home in the metaverse. After researching a few AR apps, Geenee seemed to fit my needs the best. This tool is pretty awesome - it's geared more towards the fashion industry, but honestly, it suited this use case perfectly, too.

You can add a little graphic and description to the entry screen. I made one in Blender.

Next, click "add section" and then choose "AR build" and then "World AR." Now we can design a full scene. You can add up to 40 MB of objects, but I just want Clyde to be chilling there solo (at least for now).

Drop in your .glb file that you exported from Anything World. It's that simple.

You can click on the little plus button next to the image icon and add more scenes. I have three versions of Clyde - one where he is walking, one idle, and one jumping. So I added all three here, each as different scenes. And then you can drag and drop images onto the icons if you have multiple scenes.

So easy. You can preview and publish from the top right!

Make sure your phone or other device has camera access enabled. ā˜ŗ

Exploring the world

Now you can take your creature with you anywhere - even places pets aren't allowed, like the supermarket, doctor's office, or even the nightclub. Heh.

If you try this out, let me know how it goes! And if you get stuck or need help, I'm here.

♡ tinyrainboot ♡

Clyde out for a walk, at a bar, in the bath, and just hanging out.

P.S. - Thank you, Claire, for the initiative! I'm looking forward to seeing what else I can use this tech and workflow for.

catch-up and kafka (again)

today, june 3, marks 100 years since kafka's death at age 40 from complications due to tuberculosis. his fascinating mind has left its indelible imprint upon our collective consciousness: how many other writers have been worthy of their own adjective?

like basically everyone else, i read metamorphosis in high school. but taking a deeper dive into kafka's life and other works, it's clear how much of an influence he has had on the creative minds of the last 100+ years.

yet i am most fascinated by what kafka has become in the 21st century: a romantic icon, much-adored by booktokkers and lonely internet girls. his letters to milena stand out the most here, of course, and perhaps even more so because her letters are destroyed: we see only his side of things, and in some way it is like the letters are written straight to the reader's heart. we become milena, living a brief and tumultuous love affair that in reality occurred far more within letters than outside of them. the kind of love that breeds the most creativity is the kind that lives in dreams.

it is 2024, has the love letter died?

updates: i have a live-generated collection coming out this month; i'm really excited to share it with the world. more details coming soon. some other exciting little projects are ongoing as well!

latest work:

i started a new series called "autocomplete." i used to make these back at the beginning of my internet diary project, over two years ago... it's interesting to explore yourself through your predictive text algorithm: what does my algorithm tell me about myself? are there hidden truths to be found in predictive text?

i'm thinking about ways to expand this and make it more interesting.

and another piece i made this week: "it wasn't your fault."

detail:

today i'll finish up my submission for claire silver's 7th ai contest; it's something a bit more funky and fun. ♡˚₊‧✩

internet diary, may 14 2024

yesterday you sent me a message: a polite nudge that you meant as a kindness, but simultaneously revealed everything about you i'd forgotten on purpose. that you pray to the shiny, golden-haired women of the west coast — goddesses you chose long ago — who are no wiser than you or me, but nevertheless sit glowing on their pedestals, smugly chewing their cud. (like mother birds who eat poison and spit it back into their babies' beaks.)

friendship shouldn't be so tricky, or that's what i tell myself. still, i decided to stop writing love letters for people who don't know how to read. i thought i saw a light in you once, and then i watched it slowly flicker out. sometimes the ambulance comes too late. (i realized i don't know who you are, and i never did.)

i'm thinking a lot about dreams these days. i wrote an incredible song while i was asleep; you'll have to trust me on this. i tried to wake myself up to write it down, but slumber felt too good. i enjoy my stays on the dark side of the moon. i wonder if i belong there instead, and daylight is the real interlude. there's this book i read once, about a girl who spent a year sleeping. she took a lot of pills and teetered somewhere between life and death until she came to her senses eventually. (that's one way to leave it all behind.)

sometimes i, too, want to run away from myself, but i don't know how. where would i go? i'd have to forge a new identity. pick up a new skill. stop asking for so much. otherwise i'd open my phone and the same internet would await me. the same old songs would run on loop in my brain after being jostled from some hidden corner by the right word or sign. wherever i'd go, i'd still remember that i was running. (is life just a race back to the beginning, then?)

just beyond sight

everything in life is about perspective. catch the light the right way, and you'll see rainbows.

for the last 6 months i've been working diligently to learn 3d modelling. i took a bit of a step back from the art styles i was working with before - i wanted to learn a hard skill. challenge myself.

ghosts, rainbows, the divine or whatever

i still have a long way to go to achieve what i want with that - which is to be able to create entire scenes in quasi-realism. six months is not a lot to learn how to build entire worlds.

putting my other work on the back burner has been necessary, but i've also missed it. i was putting myself under a lot of pressure to create, and create often, but with 3d this pace is much more difficult.

i'd also taken a bit of a break from using ai in my process - but to me, ai is a tool i can use as a backdrop for my thoughts. it allows me to create on a different level, not focused as much on the aesthetics, but rather on the imbued meaning.

i'm still figuring out where i'll go with everything, but i will start re-integrating the diary-style pieces alongside the other streams i have going. the tea kettle squeals when i put something, anything out there.

more room to breathe.