it's been bothering me for a while that the current "meta" in the digital-art-on-the-blockchain world is to give your work away for, essentially, free. i was trying to make peace with it. because let's be real, there's a bit of a stockholm syndrome thing going on here. i have audiences on platforms like zora and rodeo, and i'm scared to bite the hand that's been feeding me (when you're starving, the smallest crumb tastes good). "if only i could grow my following on those platforms," i'd tell myself. "then, maybe, maybe, i can find 'bigger-deal' opportunities that lead to income."
two months ago i started a new job. trying to "make it" as an artist in this space was becoming too stressful. i needed a break so that art could become fun again, and so far, that's kind of been working. but it also made me realize that i don't really have to put up with this shit, if i don't want to.
"mint anything," "onchain instagram" (because instagram is totally something we want to replicate), "don't overthink it." bröther, i am overthinking it like hell.
free mints do not primarily benefit artists, aside from potentially exposing their work to a wider audience. free mints primarily benefit the platforms that the works are being posted to. they encourage an inflated number of transactions that those platforms can then use to show how well they're performing. (look mom, we're helping create shareholder value.)
i don't mind the "mint anything" idea, per se. but we started participating in this game before we knew how the rules were going to change. with zora, we used to have so much more control: we could decide how long the mint would be open, or close it manually if we wanted. we could limit the number of mints per address. and we'd get a few dollars per mint, which was...a slightly bigger crumb. there were some great projects that worked really well with the time-limited-mint feature.
then, rodeo (a platform launched by foundation) came up with even cheaper mints, zora followed suit, and the rest of us started crying. i've spoken with other artists, and the general sentiment can probably be described as "glum disillusionment."
while "overthinking it," i came up with the following list of options for myself moving forward:
1) keep going as i am and eventually mint "higher value" pieces as 1/1s on other platforms (that is, works that require more effort, like 3D pieces)
2) boycott free mint platforms entirely, potentially to the detriment of my onchain career
3) shift to minting a different kind of content on these platforms - only works in progress, behind the scenes, etc.
4) compromise. instead of minting the entire piece, what if i were to just mint part of it?
as regards 1), i don't really want to keep going as i have been. i hate that i can't close my mints manually anymore on zora. i could just use rodeo instead, as the time limit is set to 24h, but i can't add much of a description there and the creator tools are worse. plus, it feels like such a slippery slope. if i value my work at 50 cents now, how can i justify wanting to charge more for my other work later? when will i, as an artist, suddenly become "more valuable"? 2) i am not ready for a full-scale boycott right now. people see my work being minted, people check out what i'm doing, maybe they like it, maybe they follow me. 3) i don't really want to mint my wips and behind the scenes?? i don't think these things need to be onchain as nfts. (ymmv)
but is option number 4 a cop-out?
my thought process is that this way, my work still gets more eyes on it — while i maintain control over the actual piece it is derived from. if someone were to want to own it digitally, they could contact me and we could arrange it.
will this have repercussions for context? is this a weird frankenstein way of going about it? probably. will i change my mind again in a month or two? very likely. but the fact of the matter is that there is no "right" or "best" way to do any of these things, and we're all just stumbling along more or less in the dark, feeling the wall for the damn light switch (it's got to be here somewhere).
i just feel a bit frustrated, and i know i'm not the only one. these models incentivize the consumption of art as a quick bite, something you scroll past once, double-tap, and never look at again. meanwhile, your wallet is getting bloated. these models incentivize the creation of art as something quick, low-effort, and easy, because why pour hours of your time into making something with such a low ROI?
please, tell me if you think i'm crazy and looking at this entirely the wrong way. i am open to that possibility.
regardless, i'm going to give it a shot.
i really like making video works but they're so annoying to optimize and get "just right" that i rarely do. here's a still from a short ai-animated piece i made today:
as you can see, it gets murdered by compression on zora, and everywhere else on the internet:
that's alright, it's experimental. as is the entire digital/nft art scene these days, it seems. after foundation came out with their "free mint" platform, rodeo, zora quickly followed suit and reduced its mint price, too. a lot of artists are mad about it, because there is basically zero way for us to make any money off of this. it financializes our work without offering us any clear benefit in return.
i'm trying to see these platforms as part of my "marketing budget" — they allow me to reach a new/wider audience. and maybe "we're so early" that it won't matter much someday, and maybe some of these pieces will have monetary value, in the end. time will tell.
in the meantime, if we actually want to sell something, it had better be good. this week i got a major sale, which was a really welcome, happy surprise at a time i had started to feel extremely discouraged.
now i approach my 3d weapons collection with renewed vigor: i've mapped out all of the pieces and it's just a matter of working through them and then making a lot of decisions about rendering. i'm learning a lot along the way, and getting better at this. it's very time consuming work, but i enjoy it.
alongside that, i keep working on the internet diary. that is, pieces that contain a little part of my soul. here are some of the most recent:
i'm pleased with the way my body of work is developing over time.
i'll keep going.
i was born in the darkness, and there i was raised
—the world is only as small as your cage—
a sapling rooted under cover of brush
is doomed at birth to turn to dust
but bless those hands that peel away
what otherwise tangles and decays
the gentle slope of a sunlit beam
is enough to unfurl dormant dreams.
—
title: press escape?
format: 4k mp4
duration: 17sec
process: fully created and animated using blender
by tinyrainboot, 07.2024
—
available as a 1/1 on the new lens-based platform, mystic garden: https://www.mysticgarden.xyz/gallery/0x012a99-0x046c
so pleased to present to you the contours of a dream, which is live-minting on https://playground.ink/tinyrainboot/ for the next 2 days. that means that every piece is unique and generated when you go to mint it!
the contours of a dream is about the murky spaces that comprise the human psyche: what secrets lie in the unexplored corners of (sub)consciousness?
the collection follows the path of a faceless figure wandering through chaotic and confusing states of mind, ever in search of scarce moments of tranquility.
these dreamscapes, “imagined” with the help of AI, encourage the viewer to reflect upon their own disposition. awake or asleep, where does your mind take you? what is it trying to say, and are you listening?
read the full write-up: https://paragraph.xyz/@tinyrainboot/the-contours-of-a-dream-coming-to-playgroundink-on-july-16
and watch the trailer:
check out all the pieces minted so far: https://playground.ink/tinyrainboot/
and let me know if you pick one up <3
date: july 16, 8pm CET
platform: playground.ink on solana
price: 0.3 sol
the collection is limited to a maximum of 333 pieces.
set a calendar reminder: https://calendarlink.com/event/Srzuk
read more about it: https://paragraph.xyz/@tinyrainboot/the-contours-of-a-dream-coming-to-playgroundink-on-july-16
i awaken in a world that isn’t mine. eerie and desolate, the weight of a loss i can’t quite remember washes over me. i thought i knew what time was, but it has lost its meaning in a dawn that stretches its fingers toward eternity. the air is different here: sharper, thinner, lighter. can you imagine the sound of a planet where you are the only one breathing? i do not know if this is future or past, whether this place is forgotten or still unknown. my unspeakably fragile vessel of sinew and synapse is all that carries me forward. have i been here before? or am i simply tracing the contours of a dream?
about the collection
the contours of a dream is about the murky spaces that comprise the human psyche: what secrets lie in the unexplored corners of (sub)consciousness? the collection follows the path of a faceless figure wandering through chaotic and confusing states of mind, ever in search of scarce moments of tranquility.
these dreamscapes, “imagined” with the help of AI, encourage the viewer to reflect upon their own disposition. awake or asleep, where does your mind take you? what is it trying to say, and are you listening?
each generation is completely unique and created at the time of mint
the contours of a dream is a long-form, live-minted AI collection: each piece is uniquely generated at the time it is minted, tying the image seed to the blockchain transaction hash. this approach adds an element of randomness: although guided by parameters, or “contours,” AI fills in the rest of the “dream.”
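for the technically curious, here's a tiny sketch of the general idea (my own illustration, not playground.ink's actual code): the mint transaction's id gets hashed into a number, and that number seeds the generator, so each mint deterministically produces its own image.

```python
# illustrative sketch only - not the platform's implementation
import hashlib

def seed_from_tx(tx_id: str) -> int:
    """derive a 32-bit seed by hashing the mint transaction id."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:4], "big")

print(seed_from_tx("placeholder-transaction-signature"))  # same tx, same seed, same "dream"
```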
collection themes
growing up as a “third culture kid” and a “girl online,” much of my work focuses on themes of connection, loneliness, longing, and what it truly means to be human — particularly in our hyper-technological era: never before have we been so connected, yet felt so lost.
the contours of a dream delves deeper into these topics, exploring dark and isolating mental states, contrasted with occasional glimpses of a potentially different, more serene reality. it asks the viewer to explore their own psyche, to take a look at disturbances just beneath the surface, and to think of ways to confront them.
if you've been following me for a while, you might also notice common threads with some of my earlier work, such as tell me, stranger — what haunts you?
whitelist (free mint) giveaways & logistics
stay tuned on farcaster / lens / twitter as i will be announcing and running a giveaway for whitelist spots (free mint) very shortly.
july 16, 8pm CET: whitelist mint
july 17, 8pm CET: public mint, 48 hours
platform: playground.ink on solana
price: 0.3 sol
the collection is limited to a maximum of 333 pieces.
add a calendar reminder: https://calendarlink.com/event/Srzuk
✩‧₊˚♡
i can remember vividly one of the first times i had an episode of depersonalization/derealization. i was twelve or thirteen. sitting on my bed, i looked down at my legs when a sensation of what i can only describe as a gulf between my vision and my self washed over me. "those aren't mine," i thought. "are they?"
they didn't feel like mine.
these episodes were to worsen as i got older. there were moments where i simply could not reconcile my body as belonging to me. mirrors i could control, but other reflective surfaces were enemies. my arms and legs were foreign objects. sometimes it felt like i lived in a glass box that kept me separate from everyone.
seeing myself in a photo brought on a cringe i could only define as convulsive. i did not want that to be "me." i did not want this body. i did not want this life. i did not want to be perceived. my body was a cage and i didn't know how i became trapped in it.
there's a poem i found on tumblr in 2015 (peak tumblr years -_-) that describes the sensation well. here's an excerpt:
In the dream I have a body but it’s not mine; I am an intruder wearing a suit of flesh with skin that has turned into granite.
I do not feel.
I feel too much.
There is nothing.
Everything is overwhelming.
In the dream we are machines. No emotion, just flatness: programmed thoughts, automatic speech and action without awareness or control. They call it a coping mechanism, and so I think of pulleys and gears pulling me up to sit somewhere in the top of my head and watch through a frosted lens
while someone else grips the controls, moving this body through the motions of living. Unfamiliarity in familiar places. Friends are strangers and strangers are blurs of colour, dabs of acrylic against bleached watercolour. Fog fills my mind, pressing against the glass that separates me from the world.
I bang hard on it. Bang. Bang. Bang. Let me out! You tell me that I locked myself in this steel-walled room. I say why would I do that? You open your mouth to explain but I can’t hear you over the white noise. Buzz. You pinch my arm— a whisper: not dreaming. Bruises that fade to grey, cement skies, ash world.
In the dream that is not a dream, my hands turn into birds and fly away from me. You catch them and try to give them back, but I refuse. They aren’t mine, I tell you even as you push them back onto my wrists. You ask me whose they are, then. I don’t know. I don’t know. I think I once knew someone who had these hands, but I don’t know where they went.
— Martina Dansereau, here's the only link where i found the full version
why am i writing about this? to get it out. to iron it out of me. (it's almost gone.)
you might notice reflections of it in my long-form ai collection, the contours of a dream. it uses ai to explore dark and isolating parts of the psyche - scattered with glimpses of hope.
i'm curious what you'll think of it.
(july 16)
You know that feeling when you expose something personal about yourself and it is met with a wall of gut-wrenching rejection?
It was last fall that my grandmother found out I was making art with AI. That this information reached her in the first place was not by my own volition - I know who to pick my fights with. We were having lunch when she brought it up, berating me for wasting my artistic talent on machines. I tried showing her some of my art, attempting to explain myself conceptually, but she wasn't willing to listen. I hated myself for letting her make me feel so deeply ashamed of something I was excited about.
That's not to say I don't understand her concerns: AI anxiety is a real phenomenon. We are facing many huge and unanswerable questions about how AI might change the fabric of our society as its effects ripple outward. But as a young-er person, I feel more optimistic than anything else about the potential positive impacts of this technology. From medical and scientific research, to augmenting our personal abilities, to giving us more time in the day, and yes, even to art, the possibilities are tremendous.
I’ve been making and posting my art online as Tinyrainboot for around two and a half years now. Back then, I started a project called “Internet Diary,” where I used AI to express ideas or excerpts from my daily experiences. I didn't have the energy at the time for my usual creative outlets, such as painting and writing. This was something different, something exciting, and above all, something I could do surreptitiously at my work desk. Plugged into Stable Diffusion, my thoughts became prompts, generating weird and unexpected visual outcomes – and so began my fascination with AI art.
Those of us who interact with this discipline on a daily basis often tend to forget just how new it all still is. It's been a mere 3 years since DALL-E came out, which exposed text-to-image technology to the masses for the first time – in turn raising many questions about its implications for the future of our society and the careers of those in creative industries.
I'd like to contextualize this present moment by briefly discussing how we ended up here.
During a recent talk I gave at GenAI Zurich, upon which this article is based, I went into the history of computer and AI art and gave a nod to the pioneers who walked this path long before us: Vera Molnár and Harold Cohen, for instance. There are fantastic resources out there that can summarize all this better than I can, such as this comprehensive timeline by Le Random.
For the trajectory of the AI art space in particular, though, the year 2014 marks a pivotal moment.
Generative models have existed for decades already: a generative model learns a dataset's properties and then is able to create new data that statistically fits in with the originals. But in 2014, Ian Goodfellow developed a new approach to generative models: generative adversarial networks (GANs).
The revolutionary component is this: GANs pit two neural networks, or machine learning algorithms, against each other. One is the "generator," attempting to produce outputs that match a collection of examples, while the other is the "discriminator," attempting to distinguish between the real dataset and the generator's outputs. Through many rounds of this competition, the discriminator pushes the generator to improve. In early iterations, Goodfellow's GANs were used to generate simple images of handwritten characters, faces, and even quasi-photographic scenes.
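If it helps to see the tug-of-war in code, here is a toy sketch in PyTorch (my own illustration, not Goodfellow's code): a tiny generator learns to mimic a simple one-dimensional distribution instead of images, while a discriminator tries to catch its fakes.

```python
import torch
import torch.nn as nn

def real_samples(n: int) -> torch.Tensor:
    """Draw n samples from the "real" data: a 1-D Gaussian around 4.0."""
    return torch.randn(n, 1) * 1.25 + 4.0

# generator: noise in, fake sample out
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# discriminator: sample in, probability of "real" out
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_samples(64)
    fake = G(torch.randn(64, 8))

    # 1) train the discriminator to tell real from fake
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) train the generator to fool the discriminator
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# the generator's outputs should now cluster around the real mean (~4.0)
print(G(torch.randn(1000, 8)).mean().item())
```

Scaled up from one number to millions of pixels, that same competition is what produced those early GAN faces and scenes.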
In 2017, this technology took another leap forward, when a project called CycleGAN demonstrated how GANs could be used to modify images, such as converting an image into the style of a specific painter or adding zebra stripes to a horse. See where this is going?
This line of research helped pave the way for OpenAI's DALL-E, which launched in January 2021 and, soon enough, brought text-to-image to the public's attention. Suddenly, the most bizarre combinations of ideas could be rendered visually, brought to life by an algorithm.
Today, most commonly-used AI image generators use diffusion models, rather than GANs, for higher-quality outputs. They use forward and reverse diffusion to add and remove noise from an image. Diffusion models use an iterative refinement process and are less likely than GANs to suffer from "mode collapse," which is when the algorithm gets "stuck" creating outputs that are repeated or highly similar.
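As a rough illustration (my own sketch, not any particular model's implementation), the "forward" half of that process looks like this; the trained network's job is to learn to run it in reverse, turning pure noise back into an image:

```python
import torch

def forward_diffusion(x0: torch.Tensor, t: int, betas: torch.Tensor) -> torch.Tensor:
    """Add t steps' worth of Gaussian noise to an image tensor x0 (values in [0, 1])."""
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(x0)
    # closed-form q(x_t | x_0): a scaled copy of the image plus scaled noise
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise

betas = torch.linspace(1e-4, 0.02, 1000)  # a standard DDPM-style noise schedule
image = torch.rand(3, 64, 64)             # stand-in for a real image
barely_noisy = forward_diffusion(image, t=10, betas=betas)
mostly_noise = forward_diffusion(image, t=900, betas=betas)
```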
Particularly due to a lack of transparency around how image-generating models have been trained, AI art has faced harsh criticism and been steeped in numerous controversies.
The collective behind Portrait of Edmond de Belamy, the first AI artwork ever to be auctioned by Christie's in 2018, faced backlash when it emerged that they had used the code and dataset of another artist, Robbie Barrat, to generate the work. A slew of ethical, moral and legal questions are intertwined with the discipline.
But does AI really represent the end of art as we know it?
When the camera was invented, some declared it the end of art, arguing that since taking a photo required less effort and skill than painting, it was the device, not the human, that was responsible for the final image. – James Bridle, The Guardian
I know, I know, this parallel has been beaten to death already. But I believe that this comparison glosses over an important difference between AI and the camera. What I want to point out is that we face a fundamental lack of alignment when it comes to the general public's understanding of what AI art is and how image-generating algorithms work.
Researchers at Stanford University's Institute for Human-Centered AI ran a controlled, randomized study and found that half of participants saw AI simply as a tool, while the other half viewed it as an autonomous agent, with its own beliefs and intent.
One of the challenges we face here is the anthropomorphization of AI. We say that AI “hallucinates,” or “dreams,” applying human characteristics to the technological process by which algorithms calculate their outputs. Studies have revealed that anthropomorphization affects trust, thereby creating an obstacle for the accountability and governance of AI systems. In short – it's complicated, and better understanding of how AI works will be needed to get us all on the same page.
Bridle goes on to say,
There is no true originality in image generation, only very skilled imitation and pastiche – that doesn’t mean it isn’t capable of taking over many common "artistic" tasks long considered the preserve of skilled workers. – The Guardian
It's undeniable that many jobs in creative industries will be affected by AI. Jobs will be lost, jobs will change, and new jobs will be created. If you're a product photographer or graphic designer, you have certainly already seen a significant impact on your industry brought on by AI.
But I have an argument to make here, which is that this bland summarization of what AI art is not only glosses over the human component, but also fails to recognize that art is often not just about the final image.
When we instead view AI art as a form of conceptual art, it becomes much more interesting in its marriage of man and the machine. The idea behind the work and the process to create are more important than the outcome – and this is where I believe our generation’s artists will find new ways of pushing the boundaries of creation.
If you consider the whole process, then what you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists – one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art. – Ahmed Elgammal, director of the Art and Artificial Intelligence Lab at Rutgers University
There’s a lot of fear around AI and how it will change our status quo. The rise in AI-generated content on social media has even been called the “enshittification” of the internet.
But what a lot of people tend to forget in their fear of change – in their fear of the new – is that we, humans, are the key component. We define and contextualize what is interesting to us. Justin Hanagan from Stay Grounded draws a parallel I find very interesting. He compares today's AI debate with 1997, the first time a computer beat the reigning world chess champion in a match:
What is interesting about a chess-playing computer is not that it’s good at chess, it’s that it exists at all. The interesting thing about a chess-playing computer is that some former tree-dwelling primates arranged tiny bits of metal and silicon in such a way as to coerce the universe into playing a game better than any other tree-dwelling primate could dream to. – Future Grandmasters of the Attention Game
When was the last time anyone cared about watching a computer beat a grandmaster at chess? Or watched two supercomputers fight each other at chess instead? Two-plus decades ago?
Humans care if something is interesting, and novelty can only be interesting for so long once there are no humans involved.
Let me reiterate: the human component of AI art is what makes it valuable. So could it be that, rather than witnessing the death of artistry as we know it, we are witnessing its modern renaissance? Instead of focusing on aesthetics, we are being given a chance to redefine the meaning of art and where we can take it. Shouldn’t this push creatives to try even harder? To find the boundaries, and new ways of breaking them again and again?
I believe that we are only just beginning to uncover what we can create, empowered by new tools and our evolving relationships with them. Exploring where this human-machine dialogue takes us is the most interesting part about AI art.
Naturally, we can't say yet how AI will impact the longer-term creativity of our society, or how it might affect our brains, thinking patterns or learning styles.
And all I can say now is that AI certainly hasn’t killed the artist yet. I have no crystal ball, though – maybe it still will.
♡ tinyrainboot
This is my submission for Claire Silver’s 7th AI contest. The final piece is an AR experience, found via the QR code/link below.
Tools used (AI marked with ✩):
Stable Diffusion XL (via Replicate)✩
Pixlr AI photo editor✩
Spline image-to-3D✩
Blender
Anything World✩
Geenee AR
I have really been wanting to play with AI and 3D lately, but hadn't yet found the time, so I felt this was the perfect opportunity. After all, the instructions were to "find a rabbit hole that interests you and go down it."
My idea was to create a little 3D creature using AI and then bring it to life with AR. I haven’t done something like this before, so I started off by researching tons of tools and then narrowed it down to what I thought would work best. I had a few failed attempts along the way, but I'm happy with this result!
Keep reading for the full write-up (or consider it a tutorial, if you want to make your own☺).
I started by creating a creature with Stable Diffusion that I could use as a base for the model. You could skip this step and go straight to a text-to-3D generator, but since those can be expensive to use, it may be better to play around with the visuals using a cheaper option like SDXL. I really like using Replicate because it’s altogether rather inexpensive. Midjourney is also great for this.
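If you'd rather script this step than use a web UI, Replicate also has a Python client. A minimal sketch (the prompt is a placeholder, and the model reference may need the current version hash from replicate.com):

```python
# pip install replicate, and set the REPLICATE_API_TOKEN environment variable first
import replicate

output = replicate.run(
    "stability-ai/sdxl",  # may need ":<version-hash>" appended; check replicate.com
    input={
        "prompt": "a cute, symmetrical fantasy creature, full body, soft studio lighting",  # placeholder
        "width": 1024,
        "height": 1024,
    },
)
print(output)  # typically a list of image URLs you can download
```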
Here’s my prompt:
I realized after trying this a couple of times how important symmetry is for the rigging step, so I optimized for that. I brought this image into Pixlr because they have a nice AI cutout tool. So with that, I was able to get my creature onto a transparent background with very little effort.
I also wanted it to be perfectly symmetrical, so I duplicated the side with brighter lighting, flipped it, and merged the two pieces. A little bit of gentle blurring to fix up the line at the middle and it's looking good.
For the image-to-3D part, I tried out sloyd.ai, 3D AI Studio, and Spline. Sloyd didn’t work for me because it can’t do animals yet, but I mention it anyway because it has some other interesting applications. 3D AI Studio was not bad, but my outputs tended to have extra limbs. They have a good remeshing tool, too. Spline lets you do a couple of free generations, and the initial outputs were even better than I expected. I paid for a credit top-up so that I could play around a bit more.
I pulled the transparent .png that I’d prepared into Spline. It always offers you four outputs, and the first one is usually the closest to your input image. I assume that adding a prompt here affects how the other three turn out.
I went through each of the outputs and ended up picking the fourth one because it is soooo cute and grumpy.
The preview mesh actually looked pretty good, but he (it's a he now) doesn’t have a butt and his feet are wonky.
So then I downloaded the .glb and imported it into a new Blender file. Even if you have never used Blender (it’s free!), I hope to offer you a very simple walkthrough to fix up your model - I think anyone can do this. You can also completely skip this part, but you may need to find another way to extract your texture from the model as a .png for the rigging step.
Steps in Blender (if you prefer Python, a scripted equivalent follows this list):
Create a new general file.
Delete all the existing objects by pressing “a” and then “x.”
Go to File > Import > glTF 2.0
Find and import your file (you may need to add the .glb extension to the file in file explorer so that Blender recognizes it).
Click on the object once it’s been added and press “tab.”
This brings you into editing mode. Press “a” to select all and make sure vertices mode is selected (see pink arrow).
Now press “m” - a box will pop up. Click “by distance.” This cleans up the mesh a bit by removing extra vertices.
Press “tab” to go back to object mode. If you want to see the texture better, you can use these settings:
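Here is a rough bpy equivalent of the steps above, which you can paste into Blender's Scripting tab (the .glb path is a placeholder):

```python
# rough bpy equivalent of the manual steps above; run from Blender's Scripting tab
import bpy

# delete everything in the default scene
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# import the .glb you downloaded from Spline (placeholder path)
bpy.ops.import_scene.gltf(filepath="/path/to/creature.glb")
obj = bpy.context.selected_objects[0]  # assumes the first imported object is your mesh
bpy.context.view_layer.objects.active = obj

# edit mode: select all vertices and merge "by distance"
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles()  # this is "Merge > By Distance" in the UI
bpy.ops.object.mode_set(mode='OBJECT')
```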
My suggestion for the easiest possible mesh cleanup is as follows: all you need to do is add and apply two modifiers.
Go to this tab on the right side (with your lil guy selected still):
Now click “add modifier” and search to add “mirror” and then “smooth.” (You can type it in.) By mirroring over the X axis, any weird deformities should be fixed as the mesh gets filled by what is happening on the other side. For me, this fixed the issue with the feet, for example. The “smooth” modifier also tends to fix any weird, sticking-out parts. You can set the factor to 1 and then the repeat to 1, 2, or 3 - whatever looks better to you.
Then once you are happy, apply both modifiers using the down arrow next to them.
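The same two modifiers, scripted (assumes your model is still the active object):

```python
import bpy

obj = bpy.context.active_object  # your imported creature

mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_axis[0] = True  # mirror over the X axis

smooth = obj.modifiers.new(name="Smooth", type='SMOOTH')
smooth.factor = 1.0
smooth.iterations = 2      # the "repeat" value in the UI

# apply both, same as clicking the dropdown > Apply in the modifier panel
bpy.ops.object.modifier_apply(modifier=mirror.name)
bpy.ops.object.modifier_apply(modifier=smooth.name)
```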
If you need to make body adjustments, you can go into sculpt mode.
This is what I did to give him a booty: enter the sculpt tab, then make sure you have x-axis symmetry selected (we still want to keep our guy symmetrical as this will be required for the following rigging process). Press “g” and this pulls up the grab tool. You can adjust the strength and radius at the top. Then just tug around a bit at your mesh until you get the desired result.
Okay, should be good. Now we need to export two files: the texture and the model. The texture is an image file. Click on the “texture paint” tab. On the left, you probably see a messy-looking image. This is what’s giving our guy his color. We need to save it separately from the model for the rigging step. Click Image > Save As > and then save it as a .png.
Great, now we just have to export the model. Normally a .glb file should work, but I was having issues with the texture showing up in the next step so I exported mine as an .fbx instead. Make sure your character is selected, then File > Export > FBX > and make sure to click “limit to selected objects” and then save.
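And the same two exports, scripted (the image name and output paths are placeholders; check the Image Editor for your texture's actual name):

```python
import bpy

# save the texture image that's giving the model its color
img = bpy.data.images["Material Base Color"]  # placeholder name - check the Image Editor
img.filepath_raw = "//creature_texture.png"   # "//" means next to the .blend file
img.file_format = 'PNG'
img.save()

# export only the selected object as .fbx ("limit to selected objects")
bpy.ops.export_scene.fbx(filepath="//creature.fbx", use_selection=True)
```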
Now that the model is looking better, it’s time to animate it! I couldn’t use the Reallusion rigging tool because it doesn’t have Mac support. Sad face. But I found a wonderful alternative called Anything World that has an AI rigger. That means they use AI to add a “skeleton” to the model and then animate it.
So, it says “Animate Anything,” but there are some exclusions for now. We’re still early! On their upload page, under “model processing constraints,” you can see what is included for now. You also get a few credits to try it (I had to buy a credit pack because I did this a few more times, trying different approaches and models. But I don’t mind supporting projects like this; I’m excited about what they’re working on).
Go ahead and upload your model - you’ll need to include both the .fbx and the .png here.
Then you’ll add a name and choose a type and a subcategory if available. I ran this twice, both with “hopping bird” and “walking bird.”
It takes a few minutes to rig the model with AI. While I waited, I researched baby names and decided to name my guy Clyde.
On the next screen, you’ll confirm the skeleton, and then wait again for it to add the animations. This is the most exciting moment! The animations are ready!
Clyde lives and breathes! I was dancing around my living room at this point. It’s so cool. I almost couldn't believe it worked. You can download all the animation files or just the .glb (that’s the one you need).
Now it's time to give Clyde a home in the metaverse. After researching a few AR apps, Geenee seemed to fit my needs the best. This tool is pretty awesome - it’s geared more towards the fashion industry, but honestly, it suited this use case perfectly, too.
You can add a little graphic and description to the entry screen. I made one in Blender.
Next, click “add section” and then choose “AR build” and then “World AR.” Now we can design a full scene. You can add up to 40 MB of objects, but I just want Clyde to be chilling there solo (at least for now).
Drop in your .glb file that you exported from Anything World. It's that simple.
You can click on the little plus button next to the image icon and add more scenes. I have three versions of Clyde - one where he is walking, one idle, and one jumping. So I added all three here, each as different scenes. And then you can drag and drop images onto the icons if you have multiple scenes.
So easy. You can preview and publish from the top right!
Make sure your phone or other device has camera access enabled. ☺
Now you can take your creature with you anywhere - even places pets aren’t allowed, like the supermarket, doctor’s office, or even the nightclub. Heh.
If you try this out, let me know how it goes! And if you get stuck or need help, I'm here.
♡ tinyrainboot ♡
P.S. - Thank you, Claire, for the initiative! I'm looking forward to seeing what else I can use this tech and workflow for.