> And that as soon as migrations happen, your storage costs will balloon, so you need a billing strategy on launch.
Unless people somehow figure out a way of hosting stuff somewhere other than Amazon/$host_that_charges_per_mb_transit (hint: they exist).
Considering it would have to be a lean operation (assuming it's bootstrapped), figuring out basic stuff like "we don't want to pay per MB sent" should be a pretty high-priority requirement.
I don't think you'd have to consider migrating all the data from Pivotal, but let's assume 10% just in case? Let's say that's 100TB in total (on disk), which you could host with 10x storage boxes from Hetzner at 24 EUR each per month, so 240 EUR in total, which includes 10 unmetered connections (1 per box).
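Just to make the back-of-envelope math explicit (a quick Python sketch; the per-box capacity and price are only the figures assumed above, not quotes from Hetzner's current price list):

    # Rough storage-cost sketch using the assumptions above: ~10% of the
    # Pivotal user base migrates 100% of their data, totalling ~100TB on disk,
    # spread across 10 storage boxes at 24 EUR/month each (unmetered traffic).
    total_data_tb = 100
    boxes = 10
    tb_per_box = total_data_tb / boxes            # 10 TB has to fit on each box
    eur_per_box_month = 24
    monthly_cost_eur = boxes * eur_per_box_month  # 240 EUR/month for storage

    print(f"{boxes} boxes x {tb_per_box:.0f} TB = {total_data_tb} TB "
          f"for {monthly_cost_eur} EUR/month")

That works out to about 2.40 EUR per TB per month before backups and redundancy, which is the kind of number that makes per-GB egress pricing look silly.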
> I don't think you'd have to consider migrating all the data from Pivotal...
I do. You might not have every potential customer demanding to migrate all of their data, but far, far more people than you might expect treat their issue tracking system as a system of record and external memory for a HUGE assortment of things.
One hugely (and obviously) useful query chain that such a system answers is "Hey, this customer problem sounds familiar. Did we investigate it before? Did we solve it? If so, how? If not, why not?". For long-running projects, it is impossible to select the correct 10% of data to retain while also retaining the ability to reliably -er- service those query chains.
Obviously I meant 10% of all customers would hypothetically migrate from Pivotal to this new imaginary service, not that 10% of the data from each customer would be migrated... So 100% of the data migrated from 10% of the Pivotal user base, pretty generous assumptions I think.
Respectfully: if it was obvious, I wouldn't have come to the conclusion I did and written up what I wrote.
> So 100% of the data migrated from 10% of the Pivotal user base...
Yeah, maybe. I don't know how large a slice of the Pivotal Tracker userbase you'd be able to retain even if you had a perfect clone. I bet it would be notably larger than you imagine it would be... it's my understanding that it has some pretty rabid fans who used it.
> Respectfully: if it was obvious, I wouldn't have come to the conclusion I did and written up what I wrote.
Sorry about that, I think I assumed some familiarity with moving data around/migrations, and moving 10% of a customer's data from a legacy service to a new service wouldn't make much sense in that context.
> I bet it would be notably larger than you imagine it would be
I think being able to capture 10% of existing users is already a very generous guess; realistically it would be closer to 1%.
But, without any numbers from Pivotal and actually trying to launch a cloned service, all we can do is guess :)
> ...I think I assumed some familiarity with moving data around/migrations...
I am familiar with this sort of thing, yes.
I'm also professionally familiar with people who seem to think that it's totally acceptable to obligate folks to throw away large fractions of their valuable historical data in the name of cost savings. "Surely you can identify the most valuable 10% of your data!" they say.
Given that I don't know you and what you know, and given that I've encountered a shockingly high number of these fools with a fetish for data destruction, I chose to expect the worst from your somewhat-ambiguous statement... which would ensure that at least one of us learned something, regardless of the truth of the situation.
> I chose to expect the worst from your somewhat-ambiguous statement
Yeah, I noticed that too. Not that my feelings are hurt or anything, but you might end up in friendlier and more productive discussions if you try to stick to the HN guidelines, which include:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
> ...you might end up in friendlier and more productive discussions...
Was this discussion unfriendly?
As for discussion productivity: I know that -historically- I've spent TONS of time going round in circles because of unexamined incorrect assumptions that crop up when both sides "steelman" each other's arguments, rather than speaking plainly, clearly, and politely about what they believe their conversation partner to have said. "Steelmanning" can be an acceptable backup strategy, but -IME- speaking clearly and plainly is the strongly preferred strategy between conversational partners who can remain civil.
I assume folks can remain civil in the face of polite questioning and assertion, and switch strategies if it turns out that they can't. To do it in the reverse order is just much, much slower and error-prone.
> Please respond to the strongest plausible interpretation of what someone says ... [a]ssume good faith.
Thing is, that was the strongest plausible interpretation of what you said. I assumed that you were making totally good-faith statements based on your background and structured my reply to be both polite and gather the most information reasonably possible about which of the totally plausible backgrounds you were speaking from with the fewest round trips. Had I not done this, and had you actually been one of those fools who revels in data destruction, we would likely have gone several rounds in mutual misunderstanding, rather than the half-round of solo confusion terminated by your reply that immediately cleared up the misunderstanding.
Anyway... if you have a couple of (6+) free months, you should TOTALLY clone Pivotal Tracker. IMO, the two HARD, HARD parts will be to replicate its ability to work offline, and its ability to integrate incoming changes from the server with unsaved changes on the client. Whoever wrote the data handling system for that program did a really, really, really good job.
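To illustrate what that client-side integration has to do, here's a minimal Python sketch of a field-level three-way merge; this is purely illustrative and not a claim about how Tracker actually implements it:

    def merge_story(base, server, local):
        """Field-level three-way merge of one story/ticket.

        base   - the version the client last synced
        server - the incoming version pushed from the server
        local  - the client's unsaved edits
        Returns (merged_fields, list_of_conflicting_fields).
        """
        merged, conflicts = {}, []
        for field in set(base) | set(server) | set(local):
            b, s, l = base.get(field), server.get(field), local.get(field)
            if s == b:            # server didn't touch it: keep the local edit
                merged[field] = l
            elif l == b:          # client didn't touch it: take the server value
                merged[field] = s
            elif s == l:          # both made the same change
                merged[field] = s
            else:                 # both changed it differently: a real conflict
                merged[field] = l
                conflicts.append(field)
        return merged, conflicts

And that's only the easy, single-record case; doing it offline, across reordered backlogs and comment threads, without losing anyone's unsaved work, is where it gets genuinely hard.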
> Respectfully: if it was obvious, I wouldn't have come to the conclusion I did and written up what I wrote.
I dunno, that felt obvious to me. Both the idea that you'd somehow manage to get all customers to migrate to your new service and the idea that they'd migrate only 10% of their data sound preposterous.
> ...and the idea that they'd migrate only 10% of their data sound preposterous.
Ah, I might be unduly affected by some big data (not Big Data, mind you) migrations that I'm currently involved in, where the Powers That Be are telling us that we have to throw away a huge fraction of our historical data. Well, that and the many times we've had to fight beancounters who popped on by to demand we save the company what amounts to pocket change by throwing away tons of historical data.
(It's flabbergasting how beancounters tend to ignore the price of programmer time when making their cost-cutting spreadsheets.)
It might not be as much as one would think. I just looked at their export page and you can only get 6 months of project history data out of their system - I'm guessing that means comments.
Both OVH and Hetzner offer unmetered connections for their dedicated servers, and I've only had good experiences with both so far (besides when one of OVH's data centers burned down, but I'm hoping that was an exceptional situation).
Back up to Backblaze B2, or, depending on your architecture, rely on their object storage for hot data (subject to your cache and tiering requirements). They also partner with Cloudflare for free egress (on the Backblaze side) of public content.
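As a sketch of what the backup leg could look like: B2 exposes an S3-compatible API, so something like boto3 works. The endpoint region, bucket name, and credentials below are placeholders, not anything specific to this hypothetical service:

    import boto3

    # B2's S3-compatible endpoint; region, key ID, and key are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-002.backblazeb2.com",
        aws_access_key_id="<application-key-id>",
        aws_secret_access_key="<application-key>",
    )

    # Push a nightly database dump into a (hypothetical) backups bucket.
    s3.upload_file(
        "tracker_dump.sql.gz",
        "tracker-clone-backups",
        "postgres/tracker_dump.sql.gz",
    )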
Cloudflare’s subscription agreement for self-serve accounts limits serving non-HTML content, including "video or a disproportionate percentage of pictures, audio files, or other non-HTML content."
Which seems to be a fine fit for a project management SaaS solution. If you have an origin with non-text content, you can front it with Fastly or pay Cloudflare something enterprisey (which you should be able to do once you have traction). Regardless, this is an inexpensive content distribution and object storage architecture compared to AWS egress costs.
I'm fairly sure Hetzner only hosts their dedicated servers in Germany and Finland, yeah. But OVH has dedicated servers in Europe, America, and Asia, if I recall correctly.