Memory leak in C++: vector functions in loops


I have a situation where multiple functions are called repeatedly. Here is a bare model of the program and how it is supposed to work. When the number of iterations is large, the program eats memory (not significant in this bare minimum model), which looks like a case of memory leak. Please suggest the best way to handle such situations. I am a novice. Thanks in advance.

#include <iostream>
#include <vector>
#include <cstdlib>

/* functions called repeatedly (can be in different .cpp files) */
std::vector<int> func1(std::vector<int> ttt);
std::vector<int> func2(std::vector<int> ttx);

int main()
{
    std::vector<int> temp1;
    std::vector<int> temp2;

    for (int jj = 1; jj <= 50; ++jj)
    {
        std::vector<int> vect0;

        vect0.push_back(0);

        for (int i = 1; i <= 500; ++i)
        {
            vect0.push_back(rand() % 100);
        }

        temp1 = func1(vect0);

        // other operations on temp1
        temp2 = func2(temp1);

        temp1 = temp2;

        // other operations calling similar functions
    }
    return 0;
}

// ---------------------------------------------
// functions:

// func1
std::vector<int> func1(std::vector<int> ttt)
{
    std::vector<int> tt2(ttt.size(), 0);

    for (unsigned int ii = 1; ii < ttt.size(); ++ii)
    {
        tt2[ii] = ttt[ii] - rand() % 100;
    }

    std::vector<int> tt3 = func2(tt2);

    return tt3;
}

// func2
std::vector<int> func2(std::vector<int> ttx)
{
    std::vector<int> txx(ttx.size(), 0);

    for (unsigned int ii = 1; ii < ttx.size(); ++ii)
    {
        txx[ii] = ttx[ii] % ii;
    }

    return txx;
}

Pass the std::vectors to your functions by reference and modify them in place. Get rid of all the copying; it seems pointless since you don't seem to use the originals afterwards.
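A minimal sketch of what that could look like, reusing the names from the question (this is my rewrite, not the asker's code): both functions take the vector by reference and overwrite it, so the calls no longer create temporary copies.

#include <cstdlib>
#include <vector>

void func2(std::vector<int>& ttx);   // declared up front so func1 can call it

// In-place version of func1: overwrites ttt instead of returning a new vector.
void func1(std::vector<int>& ttt)
{
    for (unsigned int ii = 1; ii < ttt.size(); ++ii)
    {
        ttt[ii] -= std::rand() % 100;
    }
    func2(ttt);   // apply the second transformation to the same buffer
}

// In-place version of func2.
void func2(std::vector<int>& ttx)
{
    for (unsigned int ii = 1; ii < ttx.size(); ++ii)
    {
        ttx[ii] %= static_cast<int>(ii);
    }
}

int main()
{
    std::vector<int> vect0(1, 0);
    for (int i = 1; i <= 500; ++i)
    {
        vect0.push_back(std::rand() % 100);
    }
    func1(vect0);   // vect0 now holds the result; no temp1/temp2 needed
    return 0;
}

The pass-by-value parameters, the returned temporaries and the temp1 = temp2 assignments all disappear, which is where the copying in the original loop came from.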

On a side note:

for (int i = 1; i <= (rand() + 1); ++i)

This will generate a new random number on each iteration, which is not what you want (besides being pointlessly unoptimized, it thrashes the probability distribution). You should generate and store the max index once, and use it in the condition:

for (int i = 1, max = rand() + 1; i <= max; ++i)

Edit: about the probability skew (yeah, it's not really relevant, but it itches me now):

Let N be RAND_MAX + 1.

The expected value of max is N/2. Pretty straightforward, since it's a uniform distribution.

Now for the case of repeated rand()s. Let S(n) be the event that we stop at iteration n (given that we have reached it already). The formula in the condition gives P(S_n) = n/N.

Let X be the number of the last iteration. What is the probability of reaching iteration n? Well, it's the probability of continuing n-1 times, and then stopping once. Since the random generations are considered independent, we take the product of all of them.

P(X = n) = Π[i=0..n-1](1 - P(S_i)) * P(S_n) = Π[i=0..n-1](1 - i/N) * n/N

Now, what is the expected value of this... thing? It's the weighted mean of the values by their probabilities, so let's crack on.

E(X) = Σ[j=0..N](j * P(X = j)) = Σ[j=0..N](j * Π[i=0..j-1](1 - i/N) * j/N)

And I'll stop there, because my math days are long gone. If you have a scientific calculator at hand, you'll get the result and be able to see which direction the distribution skews in. Or I have failed completely.

Edit 2: yeah, a calculator. Or Wolfram Alpha. That site rocks. Anyway, I tested with N = 500, and the expected value drops from 250 to around 25. So, yeah.
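For reference, here is a rough Monte Carlo check of that skew (my own sketch, not part of the original answer). It uses N = 500 as a stand-in for RAND_MAX + 1 and simulates the loop with a fresh draw in the condition on every iteration; the simulated mean comes out as a small two-digit number, versus N/2 = 250 for the stored-max version.

#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    const int N = 500;          // stand-in for RAND_MAX + 1, kept small on purpose
    const int trials = 100000;  // number of simulated loops

    std::srand(static_cast<unsigned>(std::time(0)));

    long long total = 0;
    for (int t = 0; t < trials; ++t)
    {
        // Mimic "for (int i = 1; i <= rand() + 1; ++i)" with draws in [0, N-1]
        // (ignoring the small modulo bias of rand() % N).
        int i = 1;
        while (i <= std::rand() % N + 1)
        {
            ++i;
        }
        total += i - 1;         // completed iterations of the simulated loop
    }

    std::cout << "stored max:   expected iterations ~ " << N / 2.0 << '\n';
    std::cout << "re-drawn max: simulated mean      ~ "
              << static_cast<double>(total) / trials << '\n';
    return 0;
}

Which matches the direction of the skew described above: re-drawing the bound each iteration makes long loops far less likely.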

