Proc32 runs READY after a process allocates a large amount of memory

bridged with qdn.public.qnx4
Nina Ksyunz

Proc32 runs READY after a process allocates a large amount of memory

Post by Nina Ksyunz » Wed Dec 03, 2003 8:59 pm

Hello,

I appear to have found some strange behaviour of Proc32 in QNX 4.23 and QNX 4.25
on a 100 MHz 486. If a task allocates a large chunk of memory (>16 MB), then when
the task exits, Proc32 runs READY for over a second, blocking all other tasks
from running.

I have included code below for a simple task and analyzed it using the
Deja-View tools. Deja-View showed that the very last message the exiting
task sent to Proc32 (SYS_DEATH, no response expected) caused Proc32 to stay
active for over a second, blocking the rest of the tasks from running. The
time that Proc32 stays READY depends on the size of the allocated memory:
the more memory allocated, the longer Proc32 runs READY when the task exits.

I have tried a number of things to reduce the delay, but none of them seem
to help:
- allocating the memory in many small chunks instead of one big chunk
- allocating/deallocating the memory with both new/delete and malloc()/free()
- allocating the memory as shared memory instead of regular memory
- running it on a few different computers

What is Proc32 doing that keeps it running READY for so long? Is there any way
to change this behaviour? Does the code produce the same results on your
system?

Thanks in advance,
Nina

Here is the example code that causes Proc32 to run READY for over a second:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <ctype.h>
#include <sys/wait.h>

#define PAGE_SIZE 0x6000
#define PAGES 385

int main( int argc, char **argv ) {
    pid_t child, wpid;
    int status;
    char *mem;

    if ( (child = fork()) == -1 ) {
        printf( "Unable to fork a child\n" );
        exit( -1 );
    }
    if ( child == 0 ) {
        // This is the child
        mem = (char *)malloc( PAGES * PAGE_SIZE * sizeof(short) );
        if ( mem == NULL ) {
            printf( "Unable to allocate memory\n" );
            exit( -1 );
        }

        printf( "Memory has been allocated\n" );
        sleep( 1 );
        free( mem );
        // This exit blocks the entire system for over a second because
        // Proc32 stays READY. HW interrupts are still enabled and
        // processed during this time.
        exit( 0 );
    }
    else {
        // This is the parent
        do {
            wpid = waitpid( child, &status, 0 );
        } while ( WIFEXITED(status) == 0 );
        exit( WEXITSTATUS(status) );
    }
}

Adam Mallory

Re: Proc32 runs READY after a process allocates a large amount of memory

Post by Adam Mallory » Wed Dec 03, 2003 10:56 pm

When memory is freed back to the system, it must all be zeroed at that point,
rather than being zeroed at allocation time. In addition, checks are done to
see whether semaphores etc. live in those areas and need to be released.

One way around the issue is to allocate the memory as a shared memory object
and just map it in, then destroy the object when you're done. Or, if you have
tasks that must get time to run, drop the priority of Proc and run those
tasks at a higher priority than Proc.

-Adam

Nina Ksyunz <nksyunz@nxtphase.com> wrote in message
news:bqlhmr$fv1$1@inn.qnx.com...

Davide

Re: Proc32 runs READY after a process allocates a large amount of memory

Post by Davide » Thu Dec 04, 2003 10:13 am

Nina Ksyunz wrote:
Hello,

I appear to have found a strange operation of Proc32 in QNX4.23 and QNX4.25
on a 100MHz 486. It appears that if a task allocates a large chunk of
memory (>16MB), when the task exits, Proc32 runs READY over a second,
blocking all other tasks from running.

Take a look at the old thread in this newsgroup, "Latency problem at any
process termination".

Davide

--
/*------------------------------------------------------------*
* Davide Ancri - Prisma Engineering
* email = davidea AT prisma DASH eng DOT it
*------------------------------------------------------------*/
