I'm getting this error in Valgrind after attempting to free a list. print_list dumps the list to the syslog. I'm pretty confident that output is correct.
Valgrind:
==7028== 1 errors in context 1 of 10:
==7028== Invalid read of size 4
==7028== at 0x8049603: free_list (list.c:239)
==7028== by 0x80488B5: m61_close_for_valgrind (m61.c:36)
==7028== by 0x8048825: main (mytest.c:19)
==7028== Address 0x420006c is 4 bytes inside a block of size 8 free'd
==7028== at 0x4028F0F: free (vg_replace_malloc.c:446)
==7028== by 0x804960C: free_list (list.c:239)
==7028== by 0x80488B5: m61_close_for_valgrind (m61.c:36)
==7028== by 0x8048825: main (mytest.c:19)
==7028==
mytest.c:
15 char *temp = malloc(10);
16 char *temp2 = malloc(10);
17 free(temp);
18 free(temp2);
19 m61_close_for_valgrind();
list.h
typedef struct lnode {
    ACTIVE_ALLOCATION *value;
    struct lnode *next;
} lnode;
list.c (called by m61_close_for_valgrind()):
void free_list(LIST *s) {
    lnode **nptr = &s->head;
    print_list(s);
    while (*nptr) {
        lnode **tmp = nptr;
        tmp = nptr;
        if ((*tmp)->value) {
            syslog(LOG_NOTICE, "Freeing (*tmp)->value=%p\n", (*tmp)->value);
            //printf("%p\n",(*nptr)->value);
            free((*tmp)->value); // Free active allocation metadata
        }
        nptr = &(*nptr)->next;
        syslog(LOG_NOTICE, "New *nptr value=%p\n", (*nptr));
        syslog(LOG_NOTICE, "Freeing (*tmp)=%p\n", (*tmp));
        free(*tmp); // Free node
    }
}
syslog
Sep 19 00:37:02 appliance mytest[7759]: -- Start List Dump --
Sep 19 00:37:02 appliance mytest[7759]: (*nptr)=0x903f220 (*nptr)->value=0x903f208 (*nptr)->next=0x903f260 (*nptr)->value->ptr=0x903f1f0
Sep 19 00:37:02 appliance mytest[7759]: (*nptr)->value->ptr=0x903f1f0
Sep 19 00:37:02 appliance mytest[7759]: (*nptr)=0x903f260 (*nptr)->value=0x903f248 (*nptr)->next=(nil) (*nptr)->value->ptr=0x903f230
Sep 19 00:37:02 appliance mytest[7759]: (*nptr)->value->ptr=0x903f230
Sep 19 00:37:02 appliance mytest[7759]: -- End List Dump --
Sep 19 00:37:02 appliance mytest[7759]: Freeing (*tmp)->value=0x903f208
Sep 19 00:37:02 appliance mytest[7759]: New *nptr value=0x903f260
Sep 19 00:37:02 appliance mytest[7759]: Freeing (*tmp)=0x903f220
Sep 19 00:37:02 appliance mytest[7759]: Freeing (*tmp)->value=0x903f248
Sep 19 00:37:02 appliance mytest[7759]: New *nptr value=(nil)
Sep 19 00:37:02 appliance mytest[7759]: Freeing (*tmp)=0x903f260
In fact, we allocated a 3-byte block, then free'd it. "0 bytes inside" means that our pointer points to the very first byte of this block. Valgrind tells us where the error occurred, where the block was free'd, and also where it was malloc'd.
The error message "Invalid write of size 4" means that, most likely, an integer or a pointer (on a 32-bit platform) was stored into memory that is neither allocated with malloc() nor on the stack. This happened at example.c, line 6, which was called from main.c, line 11.
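For illustration (hypothetical code, not the question's), a minimal program that produces that kind of report when run under Valgrind:

    #include <stdlib.h>

    int main(void) {
        char *p = malloc(3);   /* a 3-byte block...                                   */
        free(p);               /* ...that has already been free'd                     */
        *(int *)p = 42;        /* Invalid write of size 4, 0 bytes inside that block  */
        return 0;
    }

Memcheck flags the store as an invalid write of size 4 (the size of an int here) and reports the address as 0 bytes inside the freed 3-byte block, along with the stack where the block was free'd.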
Sometimes, running a program (including under Valgrind) reports a double-free error when the real problem is memory corruption (for example, a buffer overflow). The best way to check is to apply the advice detailed in the answers to: How to track down a double free or corruption error in C++ with gdb.
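A rough sketch of how corruption can masquerade as a double free (hypothetical code, not from the question): an overflow that tramples the allocator's bookkeeping can make a perfectly matched free() abort.

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *p = malloc(8);
        memset(p, 0, 32);   /* heap overflow: clobbers the allocator's metadata          */
        free(p);            /* glibc may abort here with a "double free or corruption"-  */
                            /* style message, even though p is freed exactly once        */
        return 0;
    }

Valgrind, by contrast, reports the memset as invalid writes past the end of the block, which is usually the more useful diagnosis.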
An "Invalid read" means that the process was trying to read memory it should not be reading: for example, an address outside any live allocation or, as in this question, a heap block that has already been freed. "size 8" means that the read was 8 bytes wide; on a 64-bit platform that could be a pointer, but also, for example, a long int.
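A minimal illustration of that message (hypothetical code, assuming a 64-bit platform where long is 8 bytes):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        long *a = malloc(4 * sizeof *a);
        printf("%ld\n", a[4]);  /* Invalid read of size 8: one element past the end of the block */
        free(a);
        return 0;
    }

Here Memcheck reports a read of size 8 at an address 0 bytes after a block of size 32 alloc'd.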
In each iteration other than the first, tmp points at the next pointer from the previous node - but you've already freed that node (in the previous iteration), so tmp points into a freed block and you can't dereference it.
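A stripped-down reproduction of that pattern (hypothetical names, not the m61 code, with value and the syslog calls left out):

    #include <stdlib.h>

    typedef struct lnode {
        struct lnode *next;
    } lnode;

    int main(void) {
        lnode *head = malloc(sizeof *head);
        head->next = malloc(sizeof *head);
        head->next->next = NULL;

        lnode **nptr = &head;        /* plays the role of &s->head                              */
        while (*nptr) {              /* after the 1st pass, this reads from the node freed below */
            lnode **tmp = nptr;
            nptr = &(*nptr)->next;   /* nptr now points at the next field INSIDE the current node */
            free(*tmp);              /* the current node is freed, so the next *nptr is invalid   */
        }
        return 0;
    }

Under Valgrind the invalid read shows up when the freed node is dereferenced again on the following pass, exactly like the report in the question.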
As caf already wrote, you're accessing memory that has just been freed.
To fix that, just don't use double pointers; single pointers will do very well here.
So replace
    lnode **nptr = &s->head;
with
    lnode *nptr = s->head;
Same for
    lnode **tmp = nptr;
in the loop: make it
    lnode *tmp = nptr;
and drop the redundant double assignment while you're at it.
Then access value and next directly as tmp->value and tmp->next. Since nptr = tmp->next now copies the pointer out of the node before the node is freed, the loop no longer touches freed memory (see the sketch below).
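Putting that together, a sketch of free_list along those lines (same LIST/lnode types as above; the syslog calls are omitted for brevity, so treat this as an illustration rather than a drop-in replacement):

    void free_list(LIST *s) {
        print_list(s);
        lnode *nptr = s->head;
        while (nptr) {
            lnode *tmp = nptr;
            nptr = tmp->next;      /* copy the next pointer out BEFORE freeing the node */
            if (tmp->value)
                free(tmp->value);  /* free the active-allocation metadata               */
            free(tmp);             /* free the node itself                              */
        }
        s->head = NULL;            /* optional: leave the list empty rather than dangling */
    }

Because nptr now holds a copy of the next pointer instead of pointing into the node that was just freed, the while condition never reads freed memory.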