CoinGecko’s engineering team optimized a 1TB+ PostgreSQL table storing hourly crypto price data by implementing table partitioning. The initiative delivered dramatic performance improvements: an 86% reduction in p99 response time (from 4.13s to 578ms) and a 20% reduction in IOPS.
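For readers unfamiliar with declarative partitioning, here is a minimal sketch of the general technique in Python with psycopg2. The table and column names (`hourly_prices`, `recorded_at`) and the connection string are illustrative assumptions, not CoinGecko’s actual schema, and migrating a real 1TB+ table involves far more care (backfilling, locking, cutover) than shown here.

```python
import psycopg2

# Hypothetical connection string; replace with your own database.
conn = psycopg2.connect("dbname=prices_dev")
conn.autocommit = True

with conn.cursor() as cur:
    # Parent table declared with range partitioning on the timestamp column.
    # Table and column names are illustrative, not CoinGecko's actual schema.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS hourly_prices (
            coin_id     bigint      NOT NULL,
            recorded_at timestamptz NOT NULL,
            price_usd   numeric     NOT NULL
        ) PARTITION BY RANGE (recorded_at);
    """)

    # One child partition per month; queries that filter on recorded_at
    # only scan the relevant partitions (partition pruning).
    cur.execute("""
        CREATE TABLE IF NOT EXISTS hourly_prices_2024_01
        PARTITION OF hourly_prices
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    """)

conn.close()
```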
Posts for: #Performance
Caching 101
Caching is a technique we use to reduce access to the original data source by temporarily storing the data on another medium. We can use various technologies for this, but whichever we choose should be faster or lighter to read from than the original source. This helps our users retrieve data more quickly, lowers utilization of the original resource, and, depending on the technology, can also save some costs.
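As a concrete illustration, here is a minimal cache-aside sketch in Python. The in-memory dictionary stands in for a real cache such as Redis or Memcached, and `fetch_from_source` is a hypothetical slow call to the original source; both are assumptions for the example, not part of the original post.

```python
import time

# Simple in-memory cache: key -> (value, expiry timestamp).
# A stand-in for a faster medium such as Redis or Memcached.
_cache: dict[str, tuple[str, float]] = {}
CACHE_TTL_SECONDS = 60

def fetch_from_source(key: str) -> str:
    """Hypothetical slow call to the original source (e.g. a database)."""
    time.sleep(0.5)  # simulate expensive work
    return f"value-for-{key}"

def get(key: str) -> str:
    """Cache-aside read: try the cache first, fall back to the source."""
    entry = _cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]  # cache hit: no trip to the original source
    value = fetch_from_source(key)  # cache miss: read the original source
    _cache[key] = (value, time.time() + CACHE_TTL_SECONDS)
    return value

if __name__ == "__main__":
    get("btc-price")   # slow: goes to the source and fills the cache
    get("btc-price")   # fast: served from the cache until the TTL expires
```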
Tiny Guide to Webscaling
Someone on Twitter asked: what if Khairul Aming wanted to set up his own website to sell his sambal? For those who may not know, his product has gained fame and typically sells out quickly once he opens orders. At present, he sells through Shopee.
From a business standpoint, it’s advisable for him to stay with Shopee. My post is primarily for educational purposes.
Disclaimer: I am not an SRE/DevOps professional, but rather someone eager to share insights that might broaden understanding of web scalability, drawn from my limited experiences. Therefore, there may be inaccuracies, though I hope none too significant.
Minimizing Problems Before and When They Occur
Although we use a monolithic architecture, we actually “break down” the application into several parts in production, such as:
- Web
- Free API
- Paid API
- Mobile API
- Workers (background jobs)
Therefore, a deploy does not mean that every server receives the new application at once. If we deploy to the Web and something breaks, only the Web is affected. That said, other parts can still run into problems if the faulty deployment manipulates shared data and corrupts it. A minimal sketch of this role-based split follows below.
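Here is one way a single codebase can be started in different roles, assuming an environment variable such as `APP_ROLE` selects which part a given server pool exposes. The role names mirror the list above, but the mechanism itself is an illustration, not the author’s actual setup.

```python
import os

# Hypothetical role names mirroring the split described above.
ROLES = {"web", "free_api", "paid_api", "mobile_api", "worker"}

def start_web() -> None:
    print("serving HTML pages")

def start_free_api() -> None:
    print("serving the free API")

def start_paid_api() -> None:
    print("serving the paid API")

def start_mobile_api() -> None:
    print("serving the mobile API")

def start_worker() -> None:
    print("processing background jobs")

def main() -> None:
    # Every server runs the same monolith build; APP_ROLE decides which
    # entry point this particular server pool exposes, so a bad deploy to
    # one pool leaves the other pools running the previous version.
    role = os.environ.get("APP_ROLE", "web")
    if role not in ROLES:
        raise SystemExit(f"unknown APP_ROLE: {role}")
    {
        "web": start_web,
        "free_api": start_free_api,
        "paid_api": start_paid_api,
        "mobile_api": start_mobile_api,
        "worker": start_worker,
    }[role]()

if __name__ == "__main__":
    main()
```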