I wrote a little watch script to watch the load average on a Linux server (just for learning purposes, and because watch uptime isn't giving the output I want).
It works great, but to get the load average per core I divide the load average values by the number of cores. Right now the core count is hardcoded, which sucks if I want to use the script on a different server.
Because of that I want to get the core count from a variable, e.g. via nproc.
All nice and well, but for some reason when I use nproc as the core count, my output is totally different than when I hardcode the cores...
This is my script now:
PATH=/usr/gnu/bin:/usr/bin:/usr/sbin:/sbin:/opt/csw/bin:/usr/local/bin:/bin
while true
do
uptime | awk '{ printf "%2.2f ",$(NF-2)/4 ; printf "%2.2f ",$(NF-1)/4 ; printf "%2.2f\n",$(NF)/4}'
sleep 1
done
As you can see, I have divided the load average by 4 (it's a 4-core server).
This gives me the following output:
$ ./watch-load.sh
1.05 0.96 1.09
1.05 0.96 1.09
0.96 0.94 1.08
0.96 0.94 1.08
0.96 0.94 1.08
But when I edit the script to use nproc, it looks like this:
#! /bin/bash
PATH=/usr/gnu/bin:/usr/bin:/usr/sbin:/sbin:/opt/csw/bin:/usr/local/bin:/bin
while true
do
uptime | awk '{ printf "%.2f ",$10/$(nproc) ; printf "%2.2f ",$11/$(nproc) ; printf "%2.2f\n",$12/$(nproc)}'
sleep 1
done
This gives me the following results:
$ ./watch-load-test.sh
0.22 0.23 0.27
0.22 0.23 0.27
0.22 0.23 0.27
0.28 0.24 0.27
0.28 0.24 0.27
This is weird, because nproc reports 4 cores:
$ nproc
4
I'm at a loss here... Any ideas on why it just doesn't work like it should?
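For what it's worth, one variant I was thinking of trying instead (untested sketch) is to run nproc once in the shell and hand the result to awk through its -v option, rather than writing $(nproc) inside the single-quoted awk program:

```shell
#!/bin/bash
# Untested sketch: read the core count once in the shell, then pass it
# to awk as the awk variable "n" via -v, so the value is usable inside
# the single-quoted awk program.
cores=$(nproc)
while true
do
    uptime | awk -v n="$cores" '{ printf "%2.2f %2.2f %2.2f\n", $(NF-2)/n, $(NF-1)/n, $NF/n }'
    sleep 1
done
```

Would that be the right direction, or am I missing something more basic about how the original version behaves?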