<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Memory on</title><link>https://frn.sh/c/memory/</link><description>Recent content in Memory on</description><generator>Hugo</generator><language>en-US</language><copyright>Copyright © Fernando Simões.</copyright><lastBuildDate>Fri, 03 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://frn.sh/c/memory/index.xml" rel="self" type="application/rss+xml"/><item><title>Reading /proc/pid/smaps</title><link>https://frn.sh/smaps/</link><pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate><guid>https://frn.sh/smaps/</guid><description>A few weeks ago I profiled a Node.js server with recurring memory spikes. I ran into dirty pages, allocator behavior, and memory that never came back. To build a cleaner mental model, I stripped the problem down to something smaller: python3 -m http.server.
root@debian:/# ps aux | grep http.server | grep -v grep
root 479226 0.0 0.1 32104 19324 pts/1 T 16:10 0:00 python3 -m http.server
19 MiB of resident memory.</description></item><item><title>Where did 400 MiB go?</title><link>https://frn.sh/pmem/</link><pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate><guid>https://frn.sh/pmem/</guid><description>I restarted all 60+ pods of a Node.js websocket app earlier today. Every single pod was sitting at ~330 MiB of memory. Except one, which was nearly double the rest - at 640 MiB.
This is a statefulset. When I built the cluster, I estimated each pod&amp;rsquo;s footprint: ~198 MiB base, plus ~25 MiB per websocket. With 30 websockets per pod, that&amp;rsquo;s roughly 950 MiB. I was wrong about the per-websocket cost - it&amp;rsquo;s lower than 25 MiB in practice.</description></item><item><title>Ok, we need to copy</title><link>https://frn.sh/cow/</link><pubDate>Sat, 22 Nov 2025 00:00:00 +0000</pubDate><guid>https://frn.sh/cow/</guid><description>Zenduty kept paging me about a Redis container. Memory issues. The log:
WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition.
I thought I understood this - Redis forks to do background saves, fork copies memory, and if there&amp;rsquo;s not enough memory for the copy it fails. Set overcommit to 1 and move on.
But I was wrong about the mechanism.</description></item></channel></rss>