🎉 Introducing Absinthe Protect 🥳

How Web3 incentive distribution is broken and why we're here to fix it.

Today marks a significant milestone for us at Absinthe Labs with the launch of Absinthe Protect – our first step toward detecting bots, improving user acquisition, and aligning incentive distribution.


Incentives are a powerful Web3 primitive with the potential to bootstrap communities, align network participants, and generate positive-sum economic value for all asset holders. This is the backbone of the modern internet: digital assets secured by cryptography that create a transparent, trustless, and aligned network greater than the sum of its parts.

But where is this promised land? Why does our industry willfully delude itself with visions of utopia when Sybil farmers consistently prey on airdrops, spammers grief messaging protocols, and botters snipe NFT collections before dedicated community members have a chance to mint?

The backbone of the modern internet is being quietly yet steadily damaged by these harmful behaviors, particularly during the highs of trading and speculation. We believe that most people don't realize the extent of this damage until it's undeniable and beyond repair.

To all the hardworking teams in Web3 fighting these problems and prioritizing real user growth: we see you. You understand that short-term vanity metrics rob a project of its long-term exponential success.

Yet there's so much noise (and misdirection) with on-chain metrics that it's hard to find signal. It's important to create clarity for teams so they can determine whether the work they are doing is having an impact.

This is especially true when it comes to real user acquisition, one of the largest unsolved problems in Web3.

The problem with token airdrops

Token airdrops are one of the least efficient user acquisition strategies ever, not just in Web3. The cost of acquisition per user runs into the thousands of dollars, while the average lifetime value of the acquired customer is below $100. Arbitrum's infamous airdrop was notoriously botted, and its cost of acquisition dwarfed the average lifetime value.

Here, the cost of acquisition is defined as the average airdrop amount per user, and the lifetime value as the average fees the network generated per user.

Arbitrum's highly farmed airdrop had a severe imbalance between acquisition cost and customer lifetime value, at a ratio of 31:1.

The money spent on user acquisition had a strongly NEGATIVE return on investment: every $1 spent on acquiring a user brought in roughly $0.03.
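To make the arithmetic concrete, here is a minimal sketch of the CAC/LTV calculation behind those figures. The per-user dollar amounts are illustrative assumptions chosen to match the 31:1 ratio above, not exact published data.

```typescript
// Illustrative CAC/LTV arithmetic behind the 31:1 figure above.
// The per-user dollar amounts are assumptions chosen to match that ratio,
// not precise published numbers.
const costOfAcquisitionUsd = 3_100; // average airdrop value per acquired user (assumed)
const lifetimeValueUsd = 100;       // average network fees generated per user (assumed)

const cacToLtvRatio = costOfAcquisitionUsd / lifetimeValueUsd;   // ≈ 31 : 1
const returnPerDollar = lifetimeValueUsd / costOfAcquisitionUsd; // ≈ $0.03 back per $1 spent

console.log(`CAC : LTV ≈ ${cacToLtvRatio.toFixed(0)} : 1`);
console.log(`Return per $1 of incentives ≈ $${returnPerDollar.toFixed(2)}`);
```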

Let's see how this compares to some standard per-user acquisition costs:

  • A YouTube Ad click costs around $0.10 per user.

  • Newsletter campaigns hover around $3 per user.

  • KYC and registration cost centralized exchanges (CEXs) around $30 per user.

It's sad to see Web3 grow accustomed to these horrible user acquisition costs. We realized we had to fix this if we wanted to distribute meaningful incentives to high-quality users. Dividing customer lifetime value by acquisition cost shows airdrops to be one of the worst-performing user acquisition strategies of all, trailing the cheaper channels above by a factor of up to 10,000x.

Why is it so bad? The answer is... well, it's complicated. It's true that user acquisition costs are paid in the project's own token, which makes the spend an unrealized gain the community gave up rather than a realized loss on the project's books. But this still means the community missed out on profit due to unequally distributed rewards, further burdening the ecosystem with inefficiency.

Where does farming fit in?

As we emerged from the depths of the bear market in 2023, reliance on bots for user acquisition hit an all-time high. With the market gaining momentum, fierce competition among projects continues to draw in a significant number of airdrop farmers. This has led to a concerning trend: a select few players amass a disproportionate share of the rewards, creating significant inefficiencies within the community.

The outsized rewards that are being funneled to a small group could have been more equitably distributed among the genuine and valuable members of the community. This imbalance not only deprives the majority of their rightful gains but also undermines the overall health and sustainability of the community.

Enter: Questing Platforms

As dApps struggle to engage and acquire users, questing platforms have emerged as a new product category with a simple value proposition: incentivize user engagement with rewards.

Users typically complete a set of actions (like following the project on Twitter or making an on-chain transaction) to become eligible for rewards, which are often tokens, NFTs, or XP points. Why does anyone farm XP points? Most do it on the assumption that it might make them eligible for an airdrop.

These rewards are distributed on a first-come, first-served (FCFS) basis or through a lottery system where lucky winners are picked at random.

Let's think like an attacker as a red-teaming exercise: how would we exploit this system to claim the largest share of rewards? If the questing platform distributes rewards via FCFS, we would automate our browsers to complete the actions faster than any human could. If winners are chosen at random, we would automate the creation of fake identities at scale (a Sybil attack) to increase our number of "lottery tickets" and maximize our chances of winning.
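To see why the lottery variant rewards Sybil attacks, here is a rough expected-value sketch. The function name and every number in it are our own illustrative assumptions, not Absinthe data: as long as the expected payout per fake identity exceeds the cost of creating one, spinning up more wallets is profitable.

```typescript
// Rough expected-value model for a Sybil attack on a random-draw reward pool.
// All names and numbers here are illustrative assumptions, not Absinthe data.
function sybilExpectedProfit(
  fakeIdentities: number,    // wallets the attacker controls
  totalEntries: number,      // total entries in the draw, attacker's included
  prizePoolUsd: number,      // total value being distributed
  costPerIdentityUsd: number // gas + setup cost per fake wallet
): number {
  const expectedReward = (fakeIdentities / totalEntries) * prizePoolUsd;
  const attackCost = fakeIdentities * costPerIdentityUsd;
  return expectedReward - attackCost;
}

// e.g. 5,000 bot wallets among 100,000 total entries over a $500k pool,
// at roughly $0.50 of cost per wallet:
console.log(sybilExpectedProfit(5_000, 100_000, 500_000, 0.5)); // ≈ 22500 (USD)
```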

It should be obvious that this encourages a short spike of farming usage that reverts once the quest is over.

And this is exactly what happens in the real world.

The following chart shows the number of unique users on Trader Joe during a quest campaign run on Galxe between 11/21/23 and 12/12/23.

Some projects impose economic barriers to make botting attacks unprofitable. This often comes at the cost of pricing out a large portion of genuine users.

However, we found that dApps that are hungry for usage don't care where the usage comes from. In these cases, all publicity is good publicity and all usage is good usage. It doesn't matter if the usage is fake or comes at the expense of loyal users and community members.

Most projects believe that dealing with bots and increasing the usage of their platform are mutually exclusive. This is wrong.

How we built Absinthe Protect

We wanted to help projects acquire new users through incentives without pricing out genuine users, reducing transaction volume, or needlessly distributing value to low-quality users. We did this by identifying real user engagement and flagging bots.

Before we started building, we set a few rules for ourselves:

  1. 📈 Drive More Engagement

    • Identifying bots and multi-accounting (Sybil attacks) should not mean that total usage goes down or valuable bots are blocked. This information should be leveraged to extract more usage from bots to level the playing field.

  2. 📱 Good User Experience

    • We cannot and should not punish legitimate users with friction to prove they're not a bot.

  3. 🎯 Accurate Detection

    • We should not compromise accuracy for privacy. We're proud to say we can achieve 99.5% bot detection accuracy without relying on any invasive user information.

  4. 👀 No Biometrics, No KYC, and No Personally Identifiable Information

    • You should not have to trust us to keep your information secure and private. We cannot use any tech that scans your face or eyes, asks for Google accounts, requires your government ID, or makes you register with any accredited third party.

Building a product that is privacy-conscious, accurate, doesn't compromise the user experience, and doesn't reduce usage was hard. Like...really hard. However, we avoided creating something that nobody wanted by not relying solely on our assumptions about what Web3 projects needed. Instead, we engaged with our partners to gain a precise understanding of their requirements.

Inspiration has come from all of our design partners, conversations long and short, and the huge amount of feedback we've gotten over the last year. Without this help, Absinthe Protect wouldn't have become what it is today – a product that we're proud to release to the world.

How does it work?

Absinthe Protect meticulously evaluates numerous behavioral and environmental indicators to authenticate user sessions. Scripts, automated browsers, and other botting tools leave residues that are hard to hide.

These residues are also hard for the host website to detect on its own, because they are not exposed through the standard JavaScript API and are unique to each browsing device.

We scan and analyze over 70 unique signals to infer whether a particular address is acting maliciously and whether we've detected it before. We continuously update these models to keep up with the latest botting techniques and obfuscations.
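To give a flavor of what "environmental indicators" can look like in practice, here is a minimal, hypothetical sketch of browser-side signal collection. It is not Absinthe Protect's actual signal set, scoring model, or SDK; it only shows how automation residue surfaces through standard browser APIs.

```typescript
// A hypothetical sketch of browser-side environment signals an anti-bot layer
// might collect. This is NOT Absinthe Protect's actual signal set or SDK --
// it only illustrates how automation residue shows up in standard browser APIs.
interface EnvironmentSignals {
  webdriverFlag: boolean;      // navigator.webdriver is true in most automated browsers
  pluginCount: number;         // headless environments often expose zero plugins
  languageCount: number;       // an empty navigator.languages list is a common headless tell
  hardwareConcurrency: number; // implausible values hint at spoofed environments
  timezone: string;            // can be cross-checked against other location signals server-side
}

function collectSignals(): EnvironmentSignals {
  return {
    webdriverFlag: navigator.webdriver === true,
    pluginCount: navigator.plugins.length,
    languageCount: navigator.languages.length,
    hardwareConcurrency: navigator.hardwareConcurrency,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
  };
}

// In a real system, dozens of such signals are combined with behavioral data
// and scored server-side; no single signal is treated as proof of automation.
```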

Absinthe Protect's Use Cases

We've worked with a number of projects to drive more user engagement, maximize marketing impact, and reduce the amount of spam.

Questing Platforms

We're partnering with the top questing platforms to offer Absinthe Protect as a quest action. It is now possible to extract value from bots without hurting community engagement: Absinthe Protect enables projects to assign more quest actions to bots than to real users and to identify the most valuable users after the quest.
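For intuition, a weighting rule along these lines might look like the sketch below. The function, multiplier, and numbers are ours purely for illustration, not the product's actual logic: the more a session looks automated, the more quest actions it must complete for the same reward.

```typescript
// Hypothetical quest-weighting rule: sessions that look automated must complete
// more actions for the same reward, so bots generate extra usage instead of
// being blocked outright. Illustrative only -- not Absinthe Protect's logic.
function requiredQuestActions(baseActions: number, botScore: number): number {
  // botScore in [0, 1]: 0 = likely human, 1 = likely automated.
  // A likely human does the base quest; a likely bot does up to 3x the work.
  const multiplier = 1 + 2 * Math.min(Math.max(botScore, 0), 1);
  return Math.ceil(baseActions * multiplier);
}

console.log(requiredQuestActions(5, 0.1)); // human-looking session: 6 actions
console.log(requiredQuestActions(5, 0.9)); // bot-looking session: 14 actions
```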

Bot-Protected and Anti-Frontrun NFT Mints

Bots can be used to purchase newly released or underpriced NFTs faster than human users can. This practice can result in scarce supply, inflated prices, and reduced accessibility for genuine collectors; at times, it leads to the collapse of entire collections. Absinthe Protect creates bot barriers for minting in real time, driving true hype to the community without fear of a wrecked launch.

Airdrop Eligibility

Airdrops currently suffer from a host of issues: Sybil attacks, horrible acquisition costs, and a focus on short-term growth rather than long-term value. Absinthe Protect helps create allowlists from quests and our internal usage graph to maximize long-term success and ease fears of token dumping.

Spam Filtering For Decentralized Messaging

We partnered with XMTP and created an on-chain reputation system to reduce spam and give reputation to new accounts with little on-chain history. Absinthe Protect + Zeekaptcha helps prioritize new users while reducing network-wide spam as an interoperable standard for messaging applications.

Protected Account Abstraction Gas Paymasters

Paymaster services aim to improve the user experience and attract new users by covering gas on their behalf. However, paymasters suffer from the free-rider problem: bots can consume "free" gas without adding real value, or grief the gas tank to prevent others from accessing the resource. Absinthe Protect safeguards against the exploitation of gas paymasters, ensuring a consistently gas-free experience.
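As a sketch of how such a safeguard can be wired into a sponsorship decision (the names, thresholds, and budget logic below are hypothetical, not Absinthe Protect's API), a paymaster service can simply refuse to cover gas for sessions that fail the bot check or for addresses that have exhausted a daily budget:

```typescript
// Hypothetical sponsorship gate for an account-abstraction paymaster service.
// Names, thresholds, and budget logic are illustrative only -- this is not
// Absinthe Protect's actual API or policy.
interface SponsorshipRequest {
  sender: string;              // smart-account address asking for sponsored gas
  estimatedGasCostUsd: number; // cost of the user operation if sponsored
  botScore: number;            // 0 (likely human) .. 1 (likely automated)
}

const BOT_SCORE_THRESHOLD = 0.8; // illustrative cutoff
const DAILY_BUDGET_USD = 2.0;    // illustrative per-address sponsorship budget

const spentToday = new Map<string, number>();

function shouldSponsor(req: SponsorshipRequest): boolean {
  // Likely bots don't get free gas at all.
  if (req.botScore >= BOT_SCORE_THRESHOLD) return false;

  // Humans get free gas up to a daily budget, so a single address can't drain the tank.
  const spent = spentToday.get(req.sender) ?? 0;
  if (spent + req.estimatedGasCostUsd > DAILY_BUDGET_USD) return false;

  spentToday.set(req.sender, spent + req.estimatedGasCostUsd);
  return true;
}
```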

On-Chain Faucets

We built the first protected on-chain faucet that doesn't require users to beg for testnet tokens with retweets, follows, or likes. Absinthe Protect keeps testnet tokens easy to access without the risk of a drained faucet.

How to get started

To see Absinthe Protect in action and integrate it in less than 5 minutes, check out our guide: https://docs.absinthelabs.xyz/docs/getting-started

For help, reach out to us at team@absinthelabs.xyz or message our Telegram group: https://t.me/absinthelabs
