It is with a surreal sense of melancholy that I announce that on July 15th, 2020, I will be shutting down the last of our user hosting. It has been a long, winding journey for my peers and me.
The research projects and personal things will continue here, but it’s time. There are a few branches at the root of this decision: customers departing as they wound down their own operations; prospects who never follow through on basic requests for information, such as amperage requirements or IP justifications, eating time I could spend landing a serious client; and ultimately COVID making it impossible to spin up new customers.
This marks the end of an era for me, and I’ll spend the rest of this post looking back on this journey.
I hosted my first website on the public internet back when I was in Form IV of schooling. I had made my first web pages in the late 90’s, as I was required to learn HTML in primary school. This public page was different though, as it was meant for engaging people rather than being a personal project. Before Facebook was public, and while privacy was non-existent on other media, I decided to set up a forum for my class of 40-odd schoolmates. It was a safe haven for us: no teachers, no parents. We could be ourselves and talk about life, the universe, and everything.
Back then I had enough Linux under my belt to run some distributed computing, but I had never learned much about the network protocols that backed my Beowulf cluster and 3D rendering farms, as various things abstracted them away. This would ultimately prove to be a formative experience, as I was caught trousers-down by a Security Incident. For various reasons, mostly laziness on the sysadmin side so I could focus on the PHP coding, plus re-purposing an existing machine in my parents’ bedroom, I had loaded a Windows XP system with a WAMP stack. I logged in one day to a ton of alerts from the system anti-virus, warning about ports being scanned from the LAN and network attacks. I panicked; to be perfectly honest, netsec wasn’t a skill of mine at the time. I was more befuddled when I logged into the supposedly breached system to find not a shred of evidence. In the resulting panic, I made some stupid changes and ended up bricking the XP box.
I had the SQL backups in hand, and decided it was time I learned Linux for more than just abstracting computational tasks away from the application level, and started getting more into network services. Curious as I was even then, I serendipitously found a Linux distribution with a focus on security. Neat!
That distribution was Backtrack
(then r2 or r3, I believe). As someone unfamiliar with it, I tumbled down the rabbit hole exploring it. Whilst it dawned on me within the next hour of playing with it that it was far from a secure operating system itself, it was an excellent way to learn the network security I clearly lacked. In a fit of passion for an interesting new thing, I decided to install the LAMP stack on Backtrack anyway and use it as a jumping-off point for learning AppSec and NetSec. The first challenge? Learning some intricacies of memory paging and chroot, and how to migrate a website running off a live CD into a persistent production install without taking the system down.
That last part was, admittedly, foolishness on my part for not paying attention after getting caught up playing with the tools on the live CD. For you see, after playing with those tools I set about my original task of installing and pushing the site back to prod… I never stopped and actually installed the operating system. Well, by the time I realised this, schoolmates were already chatting away. F*&#ing hell.
Some of you that caught the Backtrack version may already be having a laugh at my expense; others may not realise just how badly I screwed the pooch: in those days, Backtrack did not have an installer. Making a persistent install meant getting the OS contents onto disk, remounting runtime directories, and popping chroots out of memory back onto disk. Not too bad, except I didn’t want to take the system down. This meant learning about open file handles, process deadlocks and spinlocks, and everything else involved in dealing with mounts actively held by running processes. This actually ended up being a useful skill: I’ve used it extensively on red teams for sneaking data onto NFS mounts via hot-swapped mount points, without taking down the services talking to those same mounts (a minimal sketch of the trick is below). But I digress; on with the show.
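For the curious, the core of that migration looked roughly like this: copy the live root to disk whilst excluding the virtual filesystems, then bind the kernel interfaces into the new root so running services never notice. What follows is a minimal reconstruction from memory, not the exact commands I ran back then; the /mnt/persist target and the exclude list are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch of a live-CD to persistent-disk migration.

Assumes a formatted partition is already mounted at TARGET and that
this runs as root. Illustrative only -- not the original procedure.
"""
import os
import subprocess

TARGET = "/mnt/persist"  # hypothetical mount point for the new root disk

# Virtual/runtime filesystems live only in RAM and must not be copied.
EXCLUDES = ["/proc", "/sys", "/dev", "/run", "/tmp", TARGET]

def copy_live_root() -> None:
    """rsync the running root to disk, preserving ACLs/xattrs/hardlinks."""
    cmd = ["rsync", "-aAXH", "--numeric-ids"]
    for path in EXCLUDES:
        cmd += ["--exclude", path]
    cmd += ["/", TARGET]
    subprocess.run(cmd, check=True)

def bind_runtime_dirs() -> None:
    """Bind the live kernel interfaces into the new root for chroot use."""
    for vfs in ("/proc", "/sys", "/dev"):
        os.makedirs(TARGET + vfs, exist_ok=True)
        subprocess.run(["mount", "--bind", vfs, TARGET + vfs], check=True)

if __name__ == "__main__":
    copy_live_root()   # first pass while services stay up
    copy_live_root()   # quick second pass to catch files changed mid-copy
    bind_runtime_dirs()
    # From here: chroot into TARGET, install a bootloader, then restart
    # services one at a time so nothing ever goes fully offline.
```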
Fast Forward
to Form VII. I’m at the point where I’ve been evading most of the administrators at my school so I can get my schoolmates their football fix on the computers, running a shadow-IT WiFi network for games when the monks looked away. I’d taken to optimising the networks of family and friends in my spare time to put some stuffing in my coffers. As graduation came around, two fellow nerds from a nearby school schemed with me on a business idea. Content Management Systems had been growing in popularity, and there was money to be made in building out sites on them. We set up a Partnership under the name Geeks at Work Solutions. It was pretty easy work for the most part: we’d get media and design desirables from the client, build the site, template things out, and do custom integrations and styles as needed. Simple stuff.
Well, after we started having two or three projects going at once, the services of GoDaddy really lost their lustre. I looked at some refurbished server gear at a local shoppe and decided to drop a box in a colocation facility. It was cheaper for us than the GoDaddy garbage, we had better control, and we could actually test mail functionality without hitting the walls of bloody shared IPv4 on those systems.
This led to an interesting ask of us. As we started handing over sites developed on this gear, customers came back saying everything was faster with us than anywhere else they’d tried. We got asked to host and, after some optimising and tweaking for multi-tenancy, we started offering shared hosting. Before we knew it, that alone was paying all of our costs. A request came in from one customer to host their nephew’s (or someone’s) Minecraft server. We gave it a VM on the box, apparently we became the talk of his friends, and things just grew from there. To spread our name, we started hosting F/OSS mirrors and a Speedtest.net server. For some time, anyone in Dallas downloading Firefox would get a notice reading “This download provided by Geeks at Work Solutions”, and we got some business from it, though we never did determine just how much. Either way, it was a good way to give back to the community while I studied more InfoSec in the background.
We moved out of the by-the-U colos and settled in a facility with a full cabinet. From here we started setting up orchestration, hypervisor clusters, SANs, etc. We kept things fairly small; this was just a side gig. We weren’t even doing web development or CMS work at this point; some nerds were paying upwards of $200/mo for Minecraft servers that could handle 100 players at once in creative mode.
The milk overfloweth.
And thus came the first test of commitment: we had married our cow.
All of our servers depended on Bukkit, a community project that, out of the blue, basically shuttered when one of the core devs started issuing DMCA notices against their own code in the repository. There’s a long backstory to this but, as a side gig with no legal team, it wasn’t worth the risk for us to keep offering it as a service or to use the various integrations and management consoles whose licensing schemes had now drastically changed. We let existing customers stay, and pivoted to offering generic VPS services. These were not nearly as successful, but there was enough business to keep the lights on. We used the excess capacity for personal projects: I did training and research in InfoSec, I let some buddies have space for dev work, and I expanded the mirrors we hosted for various F/OSS groups. We dropped Speedtest.net because they started demanding dedicated 10G, and we weren’t going to pay for that just to have our name on their board.
This was the status quo
until about 2017. I was out of university, and the Ethereum mining craze was having another bout. I arranged a cage with a number of high-power drops at the colocation facility and charged miners for stable power and networking. We didn’t mine much ourselves, but the miners paid for the cage, and we had a non-contractual arrangement with the facility to drop it when the craze died down. With this extra space, one of the partners wanted to make the hosting a serious business rather than a passive side gig. This was already complicated, as one of the three of us was in med school and the other had been passive due to commitments from his own consulting work. We settled on a goal: I was to develop better customer management and automation for most of the typical tasks, and he had to get up to speed on networking and our environment. I invested personally in the business: new hypervisors, licenses for management software, new SANs, bandwidth, etc. It was a pretty penny, but I considered investing in my own business something worth pursuing.
We get a few months into the build-out, and I grow concerned. The partner is behind on his training, buried in his consulting. After trying to push through, it happened: over dinner with some friends, he throws a lightning bolt into the discussion. He is backing out of the hosting business, simply because he’s sick of it. Just like that, I have no partner left who can be the second admin; I am a truck factor of one, and even for a small shop that wouldn’t be acceptable to any of our customers. With contractual disaster-response obligations to some users that can no longer be met, those users have to be dropped, and without a second admin the expansion isn’t worth pursuing. I decide to cut my losses, as I’d only be digging the Geeks’ grave deeper.
As I start pushing the remaining customers out, the mining craze ends for that season. It also happens that the datacentre I was in lost quite a bit of money on the craze and so, despite the several breaches of SLA and contract they’d made with us, decided to fight for additional months of payment on a cage we had in writing we could leave with two weeks’ notice. This would have needed to be fought as a UDAAP lawsuit, since the fine print said one thing but all the statements elsewhere said another. As I’m fighting this, paying that datacentre bill out of pocket in the interim since the partner refused to cough up his personal liabilities (as is the nature of partnerships in the States), the next life-changing event happens.
At this time, I was working for a cybersecurity consulting firm. It was good money; I’d just moved into a better apartment and gotten a cat. Then the manager apparently decided to cover up some unethical practices by threatening to fire me if I didn’t participate in said practices. With the head of HR on maternity leave, there was no one for me to escalate to. My attorney looked at the information on the incident and advised me to walk immediately, as in same-day: if I spent another hour at my desk working for them, I could be implicated. So the principal consultant for this cybersecurity firm, on a massive staff-aug contract with a dozen people, just walked out of a client office. Suddenly without income, and still with a several-thousand-dollar-per-month hole in my skirt, I had to make some critical decisions. I ended up settling with the datacentre provider because I couldn’t afford the time to fight it whilst dealing with the issue with my employer. I moved the remaining servers and clients to a new LLC and dropped them into a different facility with much smaller, but more manageable, resources. These 5-odd remaining clients weren’t much, but they paid for the rack I used to continue training towards my goals and performing independent research.
This was the birth of Hacking & Coffee
At this point my hosting wasn’t for businesses, or for profit; it just paid for me to do cool things in cybersec. I was up front with my customers about this: there were no rules as long as I didn’t get complaints from my carriers. The performance was raw, the bandwidth unmetered, and the rates cut-throat. About 3/4 of my rack was sublet at this point. Why sublet that much space when I was only using a quarter of the rack myself? I needed multiple carriers, and that connectivity was the bulk of the cost. Most of my personal research at this time involved BGP on public networks. Whilst the project bore some fruit, I ran into roadblocks involving a conflict of interest with my employer, and the research was put on hiatus. I resumed it much later, and will write about it here soon. I additionally used the mass of IPs I had acquired to help with setting up C2 forwarders and test environments for non-attribution, and used the excess hypervisor capacity to spin up an EDR test range for payloads in my red team work. It was bliss. I never expected it to last forever, but I did at least hope those remaining clients would stick around, as finding a replacement customer isn’t easy when you offer no client management and no space for them to grow into several units of rack space. The users I had knew what this was, and so did I.
I had the first drop in 2019. The user had personal financial trouble, and that’s fine. I ate the difference, but that put about 10% of the rack cost onto me. When another dropped, it was a similar situation. I kept running it anyway, since I could afford it.
Then COVID came along
I could usually find a new customer to fill the void after a while, but with COVID, datacentres in this region started putting moratoriums on non-critical work. That meant no new customer hardware drops. I wasn’t able to bring people in, and others left because their own lives were upended by the pandemic.
Simultaneously, my funds for research started getting cut back as an austerity measure by my primary employer. At this point I’m holding on for a few months whilst I try to find new customers who can commit once hardware drops can happen again. I’m unsuccessful, not for lack of interest but because of the demographics of those contacting me. Most are sysadmins who seem to have never planned out a colo: stumped at determining amperage requirements, at planning spare hardware for a random NOC tech to slap in, and at working out whether their systems have the right voltage power supplies (seriously, it’s actually kind of impressive to find servers with 120V-only power in a world dominated by 208V in the States; the maths involved, sketched below, is not exactly hard). I’ve wasted a number of man-hours fielding those requests. I never respond to larger businesses, since I know I couldn’t support most of them anymore.
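For reference, the amperage question those prospects kept tripping over is just Ohm’s law for power, plus the usual 80% continuous-load derating on a breaker. A quick back-of-the-envelope sketch, with made-up wattage figures rather than any real customer’s hardware:

```python
# Back-of-the-envelope colo power planning. The server wattages below are
# invented examples; the 80% rule is the usual NEC continuous-load derating.

def amps_needed(total_watts: float, volts: float) -> float:
    """Current drawn by a load: I = P / V."""
    return total_watts / volts

# Say a prospect wants to drop four 1U servers at ~350W steady draw each:
load_watts = 4 * 350  # 1400W total

print(f"{amps_needed(load_watts, 208):.1f}A on a 208V circuit")  # ~6.7A
print(f"{amps_needed(load_watts, 120):.1f}A on a 120V circuit")  # ~11.7A

# Plan to use only 80% of a breaker's rating for continuous load, so a
# 20A/208V drop really buys you 16A, or about 3.3kW of headroom:
usable_watts = 20 * 0.8 * 208
print(f"{usable_watts:.0f}W usable on a 20A/208V drop")  # 3328W
```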
With only these types of prospects, and COVID dragging on, I made the decision. Hosting by small groups had already gone the way of the dinosaur, and cloud prices had dropped dramatically since I started this long journey, from 10x the cost for the same resources to $5 VPS instances. I resolved to keep some systems for research and personal things, but all the customers have been given notice.
It’s been a wild, weird, and ultimately insightful journey. I’ve honed skills, made industry contacts that would otherwise have been impossible, and I’m known to many people as “that wolf with the really fast Arch mirrors.” I’m proud of the work I did and, despite the ups and downs,
I’ll miss it.
I’ll continue to host the F/OSS mirrors, though they may not be as fast as they once were. At the very least I still want them for myself, but I might as well let others have at it.
So long hosting, and thanks for all the phish,
~H
P.S.: for those already asking because they’ve heard the rumours, I do not know what that one big mail-business client of mine will do. I still don’t have a plan from them, and we’re one week out now. I’ve already had to disconnect them once; I wouldn’t put it past them to have it happen again.