Icicle Lemonade Recipe
Carlos Reyes
2023-08-20
It is time to revise my reliable rule for estimating software projects. It has served me well for many years, but it does not cover all the important cases.
My rule is simple. Take your best guess and multiply it by π (pi). Very scientific. Use as many digits of π as you need to get the precision required for your project proposal.
I am ready to announce version 2.0 of my rule. Multiplying your estimate by π (3.14159…) works if you are reasonably familiar with your tools, have fairly solid requirements, and the quality expectations are average. If you want to do extraordinary work when a project has many unknowns, multiplying your estimate by ten is much more accurate.
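The rule fits in one line of code. A minimal sketch (the multipliers are from the rule above; the function names and the month-based units are mine):

```python
# Sketch of the two estimation rules. The multipliers come from the post;
# everything else here is illustrative.
import math

def estimate_familiar(best_guess_months: float) -> float:
    """Familiar tools, solid requirements, average quality bar: multiply by pi."""
    return best_guess_months * math.pi

def estimate_unknowns(best_guess_months: float) -> float:
    """Many unknowns, extraordinary quality bar: multiply by ten."""
    return best_guess_months * 10

# A three-month best guess under each rule:
print(estimate_familiar(3))  # ~9.42 months
print(estimate_unknowns(3))  # 30 months
```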
I am dead serious. We tend to overlook the time needed to come up with a coherent practical design. We forget about the time spent chasing technical dead ends. We dismiss the time to learn new tools and missing skills. We overlook rewrites due to requirements changes. We have no idea what problems we’ll run into with third party libraries and tools. We gloss over the time needed for testing. And we assume documenting the system will be somebody else’s problem.
Stormy Clouds
Perhaps an explanation of how I ended up revising my rule is in order. I just spent a year and a half working 80 hours a week on my new company, Giopler. That works out to about three person-years, whatever that means. My original estimate was three months at a leisurely 40 hours a week. So yeah, the project took about ten times longer than I estimated.
It was early 2022 when all of this got started. Two of my employers had folded within the previous six months. Everybody was predicting a recession, or at the very least, a tech recession. I wondered if the companies that were still hiring had just not gotten the memo. I felt like I was holding a bag full of lemons and I didn’t know what to do with them.
How Giopler Got Started
At first, all I was going to do was create a C++ library for collecting profiling data and saving it as a CSV file. I wanted it to be really easy to use. You would then import the data into a spreadsheet to see the results. I figured a couple of months to write it. That would give the job market some time to sort itself out. And I would get to work on a fun open source project in the meantime.
Then the design started to get complicated. I wanted to collect more and more data from the running program. A contracts library felt like a natural addition. API calls to support tracing made a lot of sense. I switched the output to JSON so I could have hierarchical data, like stack traces.
At that point, a spreadsheet was not powerful enough to create the data visualizations I wanted anymore. I had reached the point of no return. The only way to turn my vision into reality was to have a commercial product with a cluster of beefy servers for high availability and performance.
I also could have walked away from the project at that point. I had very little code written. Most of what I had done was design work. But I still felt I had a great idea: a very fast and very easy-to-use C++ profiler. I could have used it for at least the last decade of my career, and probably much further back. I just had to do this. So the design work continued.
Icicle Graphs
I have been a big fan of flame graphs for years. They are great for visualizing how performance affects a hierarchy of function calls. But I hated how slow they were to create and how you could not read the function name labels. I turned them on their side, called them icicle graphs, and figured out how to make them fast. Like, really fast.
Creating the icicle graphs turned out to be easily the hardest part of Giopler to design and implement. You can start the visualization hierarchy at any function, not just at the function call roots or leaves. This is incredibly powerful. I am not aware of any other implementation that lets you do this.
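The re-rooting idea can be sketched in a few lines of Python. This is my own toy reconstruction of the concept, not Giopler's actual implementation: given stack samples, pick any function and aggregate only the subtrees beneath it.

```python
# Toy sketch of re-rooting a flame/icicle hierarchy at an arbitrary function.
# Data structures and names are mine, not Giopler's actual code.
from collections import defaultdict

def aggregate(samples, root):
    """samples: list of (stack, count), stack listed root->leaf.
    Returns {path_tuple: total_count} for every path under `root`."""
    totals = defaultdict(int)
    for stack, count in samples:
        if root not in stack:
            continue
        suffix = stack[stack.index(root):]  # re-root at the chosen function
        for depth in range(1, len(suffix) + 1):
            totals[tuple(suffix[:depth])] += count
    return dict(totals)

samples = [
    (["main", "parse", "read"], 40),
    (["main", "parse", "lex"], 25),
    (["main", "render", "parse", "read"], 10),
]
# Re-rooting at "parse" merges both call sites of parse into one hierarchy.
print(aggregate(samples, "parse"))
# {('parse',): 75, ('parse', 'read'): 50, ('parse', 'lex'): 25}
```

The same sample set can be re-aggregated from any starting function, which is what makes the visualization flexible.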
I was getting really excited at this point. I could do this! But I know what you are thinking. This guy must be loaded to be able to just take off and work on a project like this full-time. Am I wealthy? Depends on your definition. I wouldn’t say so. Do I drive a new car every year? Definitely not. Do I live within my means? Definitely yes. Do I live modestly? I would say so. Do I carry credit card debt? Not for a few decades now. Should I have taken a couple of years off to pursue an amazing project like this one? I had to.
Customer First Design
As all of these design ideas started to swirl in my head, I knew it was time to stop and think. Other profilers are hard to use and slow. I knew I needed to nail those features. I spent a week defining the public API on paper. I simplified it until there was nothing left to take out. I renamed the functions over and over. They had to be self-documenting. I am very proud to say that early design work paid off. The public API is still virtually unchanged from that original design before I implemented the system.
Creating a header-only library with minimal dependencies was a given. I leveraged C++20 features to make it easy to compile and to have it disappear when not needed at runtime. I exposed Linux Performance Monitoring Counters (PMCs) in a simple, easy-to-use C++ class. I don’t like asking stupid questions, much less of customers. The library figures out all sorts of information about your computer and your program completely behind the scenes.
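To illustrate the "annotations disappear when not needed" design, which Giopler achieves at compile time with C++20, here is a loose Python analogy I wrote. All names here are mine, not the library's actual API: when profiling is disabled, the original function is returned untouched, so there is no wrapper overhead at all.

```python
# Analogy for compile-time-removable annotations: a decorator that becomes a
# no-op when disabled. Names and structure are mine, not Giopler's API.
import functools
import time

PROFILING_ENABLED = True  # in the C++ library this is a build-mode choice
RECORDS = []              # (function name, elapsed seconds)

def profile(fn):
    if not PROFILING_ENABLED:
        return fn  # zero overhead: the original function, untouched
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            RECORDS.append((fn.__name__, time.perf_counter() - start))
    return wrapper

@profile
def work():
    return sum(range(1000))

work()
print(RECORDS[0][0])  # prints: work
```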
Software Architecture Approach
Blue ocean projects have the advantage of getting to choose which technologies to leverage. This is both a curse and a blessing. Making a wrong decision here can easily become a costly mistake later.
I know what you are thinking: worry about the requirements you have, not the requirements you are imagining. I did. But it would have been foolish not to keep an eye on where the project was probably headed.
This was a one person self-funded project. A major rewrite because of a wrong decision is simply not in the cards. I did not know in the early stages exactly in which direction it was headed. I had a lot of sleepless nights.
The trick is choosing technologies that work well together and are mature but not getting “moldy” yet. This is of course done with a huge filter of my experiences and personal biases. So don’t sue me, bro. If I am going to dedicate most waking moments to a project, I better be enjoying it.
I like to design top down and implement bottom up. Hope for the best, plan for the worst. Attack the biggest question marks first. I created a directory with a few dozen software experiments. These are short throwaway programs for trying out different ideas quickly. Make sure the idea works first, then scale it out by reimplementing it using production quality code.
Alternatives Considered
I dismissed cloud servers out of hand, whether Amazon Web Services, Google Cloud Platform, or Microsoft Azure. It takes careful management to make these services cost-effective. If you are a small company using a cloud provider for non-trivial computing tasks because you got some free usage credits, you are being short-sighted.
I considered all sorts of virtual environments: Docker, Kubernetes, KVM, Proxmox, OpenStack, VirtualBox, and Xen, among others. They all have their valid uses. Ultimately, they felt like complications I would be better off without if I could manage it.
I had decided early on to use PostgreSQL for the data storage. PostgreSQL is a fantastic single-computer database server. It is a weaker match when you need a high-availability cluster.
Cassandra was an intriguing choice for a distributed database system. But its single-server performance is lacking for my taste. ScyllaDB, a rewrite in C++, was very tempting. But it felt a bit too risky.
Fast application servers are supposed to be written in Go nowadays. I had never used the language, and learning it left me a bit cold. A solid solution, but not for me.
I wrote some code using the Poco C++ networking library. Poco is a great library, with good performance and good documentation. But writing a web server in C++ is just not something a lot of people out there are doing. It was a bit of a struggle putting all the pieces together.
I was also not convinced I needed a compiled language in the server. This was a data driven application, not one driven by sheer number crunching. I reconsidered my choices.
Server-side JavaScript was looking good. Using the same programming language in the web browser and in the web server is incredibly powerful. I tried out Svelte. Everybody should try out Svelte. It is a breath of fresh air. But it became a real challenge to find all the pieces I needed in its ecosystem. Reluctantly, I had to let go of it.
Implementation
My choice of desktop operating system has been Linux for a couple of decades now. Lately Arch, but I’ve used many distributions. I settled on Ubuntu LTS for the servers because of its wide support. I tried using Nix as a way of having newer packages, but found it a bit too fragile. So plain Ubuntu with a couple of PPAs (Personal Package Archives) for fresher key packages was my final solution. It is working well.
I have zero artistic ability. I was so impressed by TailwindCSS, I bought a license early in the project. I was eventually seduced by the React with Next.js juggernaut. The React documentation is fantastic, by the way. I had to learn D3 to create my icicle graphs, so I also use it for my line graphs. Tabulator for the table component was an easy decision.
As with TailwindCSS, I invested in a JetBrains All Products Pack. I am heavily using IntelliJ IDEA, CLion, PyCharm, and DataGrip. One of my better decisions. Getting your code compiled in the background as you type is incredibly powerful.
I am using MDX to format the documentation and the blog posts. Yes, I ended up implementing my own simple blogging platform. The documentation is about 50 printed pages long right now, which feels right. The system is designed to be self-documenting, and most of the documentation is redundant.
After trying a handful of alternatives, I settled on MongoDB. I miss the power of SQL, but so far it has proven to be a fantastic choice. Extensive documentation, a great community, and all the features I am likely to need in the future. I have not had a single issue with it.
I have used HAProxy before and can recommend it without hesitation. Hetzner for server hosting and Bunny CDN to speed up the web page serving. ClickSend for email and text message processing. Lemon Squeezy for payments processing. They take care of all tax issues, which is well worth it when the audience is global.
Hardware Architecture
The hardware architecture is two hefty dedicated servers and a small cloud server, all spread across three data centers. Both dedicated servers run HAProxy, Node.js, and MongoDB. Using MongoDB replication, one dedicated server is the primary, handling write operations. The other handles the read operations.
The small cloud server is the real star of the show. It monitors the other servers, logging their performance data in a local database. When it detects a problem, it switches the floating IP from one dedicated server to the other. It sends me a text message when that happens. It also lets me switch the floating IP manually for maintenance. And it gives me the real-time cluster status at a glance.
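The monitoring-and-failover loop can be sketched as a small state machine. This is a toy version of the idea, not the real arbiter; the threshold, the class name, and the switch_floating_ip and notify callbacks are my assumptions:

```python
# Toy sketch of a floating-IP failover decision loop. Thresholds, names,
# and callbacks are my assumptions, not Giopler's actual arbiter code.
FAIL_THRESHOLD = 3  # consecutive failed health checks before failing over

class Arbiter:
    def __init__(self, primary, standby, switch_floating_ip, notify):
        self.primary, self.standby = primary, standby
        self.switch_floating_ip = switch_floating_ip  # e.g. hosting API call
        self.notify = notify                          # e.g. send a text message
        self.failures = 0

    def record_health_check(self, primary_healthy: bool):
        """Call once per monitoring cycle with the primary's health status."""
        self.failures = 0 if primary_healthy else self.failures + 1
        if self.failures >= FAIL_THRESHOLD:
            self.switch_floating_ip(self.standby)
            self.notify(f"failed over from {self.primary} to {self.standby}")
            self.primary, self.standby = self.standby, self.primary
            self.failures = 0

events = []
arbiter = Arbiter("server-a", "server-b",
                  switch_floating_ip=lambda target: events.append(("switch", target)),
                  notify=lambda msg: events.append(("sms", msg)))
for healthy in [True, False, False, False]:  # three consecutive failures
    arbiter.record_health_check(healthy)
print(events)         # one switch event plus one notification
print(arbiter.primary)  # prints: server-b
```

Requiring several consecutive failures before switching avoids flapping the floating IP on a single transient health-check error.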
This is such a beautiful and effective architecture, I am very surprised I have never seen it documented. I call it the “two and a half men” architecture. The dedicated servers could also be cloud servers. My database schema is sharding ready, so growing the number of data servers will be easy when the time comes to make the switch.
The arbiter program uses Python and SQLite. I wrote it because I could not find anything like it. Surely there are others out there with a similar need? If you would find this useful, please let me know. I could turn it into a decent open source project with a couple of months of work. Famous last words...
Performance
Conventional wisdom says you should not worry about performance until you know for sure you have a performance problem. I knew this was an ambitious project before coding started. I knew getting the performance I wanted would be a challenge. This was a customer first and performance first design. No apologies.
To push the system to its limits, I created test programs that ran for hours and generated hundreds of thousands of server events. Let me tell you, it is a humbling experience to have a stress test uncover dark corners of a design you forgot about or simply did not anticipate. I am talking orders of magnitude difference between the untuned system and where it is now.
Summary
The world has changed since I started this project a year and a half ago. Word on the street is now that we may bypass a recession entirely. Large language models are infusing new life into the software industry at many companies. From the emails I am getting with job openings, even the software job market appears to be bouncing back. Life is good.
I am extremely proud of how Giopler turned out. I am amazed how little the final system differs from the initial design, even as the implementation took many twists and turns. Even better, I am still not aware of anything else out there that comes close to the feature set.
So that is my story of how I turned a bunch of lemons, combined them with icicles, and created my own recipe for icicle lemonade. It turned out mighty tasty. I hope you enjoy using it as much as I did creating it.
About Giopler
Giopler is a fresh approach to writing great computer programs. Use our header-only C++20 library to easily add annotations to your code. Then run our beautiful reports against your fully-indexed data. The API supports profiling, debugging, and tracing. We even support performance monitoring counters (PMCs) without ever requiring administrative access to your computer.
Compile your program, switching build modes depending on your current goal. The annotations go away with zero or near zero runtime overhead when not needed. Our website has charts, tables, histograms, and flame/icicle performance graphs. We support C++ and Linux today, others in the future.