c++ - Program gets aborted before new throws bad_alloc


The small C++ program below apparently gets aborted in a number of cases before "new" throws an exception:

int main(){
   try{
      while(true)
         new char[2];
   }
   catch(...){
      while(true);
   }
}

The program was first compiled with MinGW/g++ 4.6.1 and executed on a 32-bit Windows 7 system via the shell. No other serious programs (in terms of memory/CPU consumption) were running at the time. The program terminated before entering the catch block. When compiling and running the program under Linux (Debian 7.3, gcc/c++ 4.7.2, 24 GB memory) the program behaved similarly. (The reason for the infinite loop in the catch block is to avoid anything there that might itself throw exceptions - particularly I/O.) The surprising thing (to me at least) happened when launching the program twice on the Windows system: if the program was launched in two different shells (almost) simultaneously, neither of the two processes terminated before the new-exception was thrown. Also unexpected to me was the observation that a moderate enlargement of the size of the allocated chunks of memory (by replacing the "2" in the fourth line with "9") made the premature termination disappear on the Windows system. On the Linux machine a more drastic enlargement was needed to avoid the termination: approx. 40,000,000 bytes per block were necessary to prevent termination.

What am I missing here? Is this the normal/intended behavior of the operating systems involved? And if so, doesn't it undermine the usefulness of exceptions - at least in the case of dynamic allocation failure? Can the OS settings be modified somehow (by the user) to prevent such premature terminations? And finally, regarding "serious" applications: at what point (with respect to dynamic memory allocation) do I have to fear my application getting abruptly aborted by the OS?

Is this the normal/intended behavior of the operating systems involved?

Yes, it's known as "overcommit" or "lazy allocation". Linux (and I think Windows too, but I never program for that OS) will allocate virtual memory to your process when you request it, but won't try to allocate physical memory until you access it. That's the point where, if there is no available RAM or swap space, the program will fail. Or, in the case of Linux at least, other processes might be randomly killed so that you can loot their memory.
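
For illustration, here is a minimal sketch of that two-step failure (my own example, assuming a 64-bit Linux system with overcommit enabled; the 8 GiB figure is an arbitrary value meant to exceed RAM plus swap on the test machine):

#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>

int main(){
   // Arbitrary size intended to exceed physical RAM + swap; adjust for your machine.
   const std::size_t size = 8ull * 1024 * 1024 * 1024;   // 8 GiB

   // With overcommit, this usually succeeds: only virtual address space is handed out.
   char* p = new (std::nothrow) char[size];
   if (p == nullptr){
      std::cout << "allocation refused up front\n";
      return 1;
   }
   std::cout << "allocation succeeded; touching the pages now...\n";

   // Writing to every page forces the kernel to back the range with physical
   // memory; on an overcommitting system this is where the process may be
   // killed, without any C++ exception ever being thrown.
   std::memset(p, 1, size);

   std::cout << "survived\n";
   delete[] p;
}

If the process disappears during the memset rather than at the new, that is lazy allocation at work.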

Note that, when doing many small allocations like this, the process will allocate larger lumps and place them in its heap; the allocated memory is typically accessed immediately. A large allocation will be allocated directly from the OS, and your test program won't access that memory - which is why you observed that the program didn't abort when it allocated large blocks.
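
As a sketch of that effect, here is a variant of the question's program using roughly the 40,000,000-byte block size mentioned above (whether and when bad_alloc is actually reached depends on the address-space limits and overcommit policy of the machine):

#include <cstddef>
#include <iostream>
#include <new>

int main(){
   std::size_t count = 0;
   try{
      while(true){
         // Large blocks are obtained directly from the OS and are never
         // written to afterwards, so no physical pages get committed.
         new char[40000000];
         ++count;
      }
   }
   catch(const std::bad_alloc&){
      // The failure surfaces here as bad_alloc (when the address space or
      // the commit limit runs out) instead of as an abrupt kill by the OS.
      std::cout << "bad_alloc after " << count << " blocks\n";
   }
}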

And if so, doesn't it undermine the usefulness of exceptions - at least in the case of dynamic allocation failure?

Yes, rather.

Can the OS settings be modified somehow (by the user) to prevent such premature terminations?

On Linux, there's a system variable to control the overcommit policy:

echo 2 > /proc/sys/vm/overcommit_memory 

The value 2 means never overcommit - allocations will fail if you ask for more than your uncommitted RAM plus swap. 1 means never fail an allocation. 0 (the default) means guess whether an allocation request is reasonable.

I've no idea whether Windows is similarly configurable.

