SpamLabs: Some more validation done (or not?)

After recently pivoting SpamLabs from an email address analyzer API to a spam filter targeting cold emails, I decided to continue validating this new approach.

The problem is real, and I am dealing with it myself: my work email gets a steady trickle of annoying cold emails trying to sell me all kinds of services: outsourcing, SEO optimization, and whatnot.

In the outreach I did previously with the initial idea of SpamLabs, a typical response was along the lines of "I don't have a problem with fake users, but I am getting spammed by bad-grammar SEO sellers all the time". So that was the direction I took.

How big is this new problem?

Only to find out later that even this problem isn't that big. These cold emails are at best a mild annoyance, they only show up once you reach a certain scale, and only if you expose your work email address on a website somewhere.

And even in the worst cases, people are dealing with just a few of these cold emails every day, and each one gets resolved in a few seconds.

So a filter would in fact save a few seconds of work every day, maybe a few minutes per year. But to get that, you would have to pay at least 20€ per month (a hefty price point, more on that later) and grant a third party access to your whole email inbox. Not a very appealing proposition.

The price

During my product iterations, I integrated ChatGPT to help determine whether an email message is trying to sell something. It worked reasonably well; the only problem was that it sometimes changed its answer, depending on how obvious the sales attempt was.

So for some messages, calling the OpenAI API 5 times with exactly the same content would get me 3 positives and 2 negatives, or some other combination. It was not an exact science.

To mitigate this, I started making multiple OpenAI calls for the same message and computed an average positive rate. If it was above a threshold, the message was flagged as a cold email; otherwise, it was either negative or ambiguous.
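The voting logic can be sketched roughly like this (the classifier below is a stub, the function names and threshold are my own illustration, and the real version would call the OpenAI chat API instead):

```python
import random

def classify_with_votes(classify, message, n_calls=4, threshold=0.75):
    """Run a non-deterministic yes/no classifier several times on the
    same message and average the positive votes."""
    votes = [classify(message) for _ in range(n_calls)]
    positive_rate = sum(votes) / n_calls
    if positive_rate >= threshold:
        return "cold_email"      # enough calls agreed it's a sales pitch
    if positive_rate == 0:
        return "legitimate"      # unanimous negative
    return "ambiguous"           # the model kept changing its mind

# Stub standing in for the real OpenAI call, which may answer
# differently each time for the same content.
def flaky_classifier(message):
    return random.random() < 0.9  # usually, but not always, positive

print(classify_with_votes(flaky_classifier, "Buy our SEO services!"))
```

The averaging trades extra API calls for a more stable verdict, which is exactly what drives the cost problem discussed next.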

Doing these repeated checks meant that my OpenAI API usage would quickly scale up. I ran some estimates: with 4 API calls per check, the small plan of 1000 checks would cost me ~12€ in OpenAI fees in the most pessimistic case. And that is with gpt-3.5-turbo, the cheapest model; if I used gpt-4, the cost would ramp up to about 60€ per month. HUUUUGE costs in both cases. The cheapest plan I could offer was 20€, in order to also cover the database and other hosting costs, plus labor.

In more optimistic scenarios, I could bring the costs down to about a third of the figures above, but the average would land somewhere in between.
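A back-of-envelope version of that estimate looks like this. The token count and per-token rates are illustrative assumptions chosen to reproduce the figures above, not OpenAI's actual price sheet:

```python
# Assumptions (not real pricing):
#   - each check runs 4 chat completions
#   - each completion consumes ~1500 tokens in the pessimistic case
#   - gpt-3.5-turbo assumed at 0.002 per 1K tokens; the gpt-4 rate
#     below is simply the one implied by the ~60€ figure in the post
CALLS_PER_CHECK = 4
TOKENS_PER_CALL = 1500

def monthly_cost(checks, price_per_1k_tokens):
    """Estimated monthly model cost for a given number of checks."""
    tokens = checks * CALLS_PER_CHECK * TOKENS_PER_CALL
    return tokens * price_per_1k_tokens / 1000

print(monthly_cost(1000, 0.002))  # pessimistic gpt-3.5-turbo case
print(monthly_cost(1000, 0.01))   # implied gpt-4 rate
```

Either way, the model cost alone eats most of a 20€ plan before hosting and labor are even counted.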

So... not great...

The validation?

At this point, with the second round of validation for SpamLabs, I have to say that even though the problem exists, it seems to be more of a slight annoyance than a real problem in need of solving. So the demand is very small, and I wasn't able to get any excitement out of anyone. At best some courtesy "nice product!" from fellow indie hackers, and some "meh" attitudes from potential customers (whom I would have expected to have a high chance of converting).

The truth is that I didn't have the final solution in place, just an API. But even so, the idea itself wasn't sparking much genuine interest, and neither were the conversations about the problem itself. It was all pretty "meh" and dull, so not a great sign.

At first I was excited to see there weren't many competitors (or any at all), but now I am starting to understand why: the demand just isn't there.

I think I am ready to close this chapter, and I am pleased I reached this conclusion without pouring hundreds of development hours into it (something I did with other attempts). I put up an MVP and tried to start conversations about the problem. I learned a lot from this experience, and I am looking forward to the next one!