<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Cpu on</title><link>https://frn.sh/c/cpu/</link><description>Recent content in Cpu on</description><generator>Hugo</generator><language>en-US</language><copyright>Copyright © Fernando Simões.</copyright><lastBuildDate>Thu, 11 Dec 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://frn.sh/c/cpu/index.xml" rel="self" type="application/rss+xml"/><item><title>108,725 forks</title><link>https://frn.sh/tforks/</link><pubDate>Thu, 11 Dec 2025 00:00:00 +0000</pubDate><guid>https://frn.sh/tforks/</guid><description>First week at a new job. A colleague was showing me around our Grafana dashboards, just routine monitoring of the baremetal machines. One caught my eye: a machine with 32GB RAM and a top-of-the-line processor was hitting 90% CPU. A few containers running, no alerts, and nobody had reported anything.
I found a process whose command was bash startup.sh and that had been running for 28 minutes.
I straced it for a few minutes:</description></item><item><title>Sigterm a D state process</title><link>https://frn.sh/sigterm/</link><pubDate>Sun, 08 Jun 2025 00:00:00 +0000</pubDate><guid>https://frn.sh/sigterm/</guid><description>Load average hit 12 on a 2 vCPU machine during a production incident. My first thought was that CPU must be the bottleneck - 12 is 6x the core count.
But it wasn&#8217;t.
Linux load average counts three things: processes running on a CPU, processes waiting in the run queue, and processes in uninterruptible sleep (D state). From the kernel source:
The global load average is an exponentially decaying average of nr_running + nr_uninterruptible.</description></item></channel></rss>