I'm working on a small function that gives my users a picture of how occupied the CPU is. I'm using cat /proc/loadavg, which returns the well-known three load-average numbers.
My problem is that the CPU isn't doing anything right now, while I'm developing.
Is there a good way to generate some load on the CPU? I was thinking of something like makecpudosomething 30, for a load of 0.3 or similar. Does an application like this exist?
Also, is there any way to eat up RAM in a controlled fashion?
while true;
do openssl speed;
done
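Each pass of openssl speed keeps roughly one core busy. To load several cores at once, you can either background multiple copies of the loop or use the benchmark's own -multi option (4 is just an example count):
while true; do openssl speed -multi 4 >/dev/null 2>&1; done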
Also, the stress program will let you load the CPU/memory/disk to the levels you want to simulate:
stress is a deliberately simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is written in C, and is free software licensed under the GPLv2.
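For instance, a one-liner like this (the worker counts, memory size and duration are just examples) keeps two CPU spinners and one 256 MB memory worker busy for a minute:
stress --cpu 2 --vm 1 --vm-bytes 256M --timeout 60s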
To maintain a particular level of CPU utilization, say 30%, try cpulimit:
It will adapt to the current system environment and adjust for any other activity on the system.
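A minimal way to try it (assuming cpulimit is installed; $! expands to the PID of the last background job) is to start a CPU hog and then cap it at 30% of one core:
yes > /dev/null &
cpulimit -p $! -l 30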
There's also a kernel patch adding native CPU rate limits here: http://lwn.net/Articles/185489/
It isn't clear to me whether you want to generate arbitrary CPU load or arbitrary CPU utilization. Yes, they are indeed different things. I'll try to cover both problems.
First of all: load is the average number of processes that are running, runnable or waiting in the CPU scheduler's queues over a given amount of time, "the ones that want your CPU" so to speak.
So if you want to generate an arbitrary load (say 0.3), you have to keep a process running 30% of the time and off the run queue for the other 70%, moving it to the sleep queue or killing it, for example.
You can try this script to do that:
export LOAD=0.3
while true
do
# burn the CPU for $LOAD seconds...
yes > /dev/null &
sleep $LOAD
killall yes
# ...then stay idle for the rest of the 1-second cycle
sleep `echo "1 - $LOAD" | bc`
done
Note that you have to wait some time (1, 5 and 15 minutes) for the respective numbers to come up, and they will be influenced by other processes on your system. The busier your system is, the more these numbers will fluctuate. The last number (the 15-minute interval) tends to be the most accurate.
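To watch the averages converge toward your target, you can poll the same file your function reads, for example:
watch -n 5 cat /proc/loadavg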
CPU usage is, instead, the amount of time for which the CPU is used to process a computer program's instructions.
So if you want to generate arbitrary CPU usage (say 30%), you have to run a process that is CPU-bound 30% of the time and sleeps for 70% of it.
I wrote an example to show you that:
#include <stdlib.h>
#include <unistd.h>
#include <err.h>
#include <math.h>
#include <sys/time.h>
#include <sys/wait.h>

#define CPUUSAGE 0.3     /* target CPU usage: a float between 0 and 1 */
#define PROCESSES 1      /* number of child worker processes */
#define CYCLETIME 50000  /* total cycle interval in microseconds */
#define WORKTIME (CYCLETIME * CPUUSAGE)
#define SLEEPTIME (CYCLETIME - WORKTIME)

/* returns t1 - t2 in microseconds */
static inline long timediff (const struct timeval *t1, const struct timeval *t2)
{
    return (t1->tv_sec - t2->tv_sec) * 1000000 + (t1->tv_usec - t2->tv_usec);
}

static inline void gettime (struct timeval *t)
{
    if (gettimeofday (t, NULL) < 0)
    {
        err (1, "failed to acquire time");
    }
}

int hogcpu (void)
{
    struct timeval tWorkStart, tWorkCur, tSleepStart, tSleepStop;
    long usSleep, usWork, usWorkDelay = 0, usSleepDelay = 0;

    do
    {
        /* burn the CPU for WORKTIME microseconds, compensating for
           how much the previous busy phase overshot its slot */
        usWork = WORKTIME - usWorkDelay;
        gettime (&tWorkStart);
        do
        {
            sqrt (rand ());
            gettime (&tWorkCur);
        }
        while ((usWorkDelay = (timediff (&tWorkCur, &tWorkStart) - usWork)) < 0);

        /* sleep for the rest of the cycle, again compensating for the
           oversleep measured in the previous iteration; if we overslept
           by more than a whole slot, skip sleeping entirely */
        if (usSleepDelay <= SLEEPTIME)
            usSleep = SLEEPTIME - usSleepDelay;
        else
            usSleep = 0;
        gettime (&tSleepStart);
        usleep (usSleep);
        gettime (&tSleepStop);
        usSleepDelay = timediff (&tSleepStop, &tSleepStart) - usSleep;
    }
    while (1);
    return 0;
}

int main (int argc, char const *argv[])
{
    pid_t pid;
    int i;

    for (i = 0; i < PROCESSES; i++)
    {
        switch (pid = fork ())
        {
        case 0:
            _exit (hogcpu ());
        case -1:
            err (1, "fork failed");
        default:
            warnx ("worker [%d] forked", pid);
        }
    }
    wait (NULL);
    return 0;
}
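To try it, compile with the math library linked in (the file name is just an example) and watch the process in top:
gcc -o hogcpu hogcpu.c -lm
./hogcpu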
If you want to eat up a fixed amount of RAM, you can use the program in cgkanchi's answer.
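If you already have stress installed, an alternative sketch is to let one of its memory workers allocate a fixed block and hold on to it (the size is an example; --vm-keep stops it from freeing and reallocating the memory):
stress --vm 1 --vm-bytes 512M --vm-keep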