#include <iostream>
#include <string>
#include <deque>
#include <vector>
#include <unistd.h>
using namespace std;

struct Node
{
    string str;
    vector<string> vec;
    Node() {}
    ~Node() {}
};

int main()
{
    deque<Node> deq;

    // Build 100 nodes, each holding 100000 default-constructed strings.
    for (int i = 0; i < 100; ++i)
    {
        Node tmp;
        tmp.vec.resize(100000);
        deq.push_back(tmp);
    }

    // Destroy every element...
    while (!deq.empty())
    {
        deq.pop_front();
    }

    // ...and swap with an empty temporary to release the deque's own buffers too.
    {
        deque<Node>().swap(deq);
    }

    cout << "released\n";
    sleep(80000000);   // keep the process alive so its memory usage can be inspected with top
    return 0;
}
Watching the process in top, I found its memory usage was still about 61 MB. Why? (The issue does not occur if Node has a copy constructor.) I would like to know why this happens, not how to make it correct.
gcc (GCC) 4.9.1, CentOS
Best Answer
Generally, new/delete and malloc/realloc/free arrange for more memory from the OS using sbrk() or an OS-specific equivalent, and divide those pages up however they like to satisfy the program's allocation requests. It is not worth the bother to try to release small pages back to the OS: the extra overhead of tracking which pages are and are not still part of the pool, re-requesting them, and so on outweighs the benefit. In low-memory situations, normal caching mechanisms will let long-unused pages be swapped out of physical RAM anyway.
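This pooling is easy to observe directly. Below is a minimal, glibc-specific sketch (the block size, block count and sleep durations are arbitrary values chosen for illustration, not taken from the original program): freeing a large number of small blocks does not shrink the process's RSS, and only an explicit malloc_trim(0), a glibc extension, asks the allocator to hand its free heap pages back to the kernel.

// Minimal glibc-specific sketch: small blocks are carved out of the
// sbrk()-grown heap; free() returns them to glibc's free lists, not to
// the kernel, so RSS stays high until the allocator is asked to trim.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <malloc.h>   // glibc: malloc_trim()
#include <unistd.h>

int main()
{
    std::vector<void*> blocks;
    for (int i = 0; i < 100000; ++i)
        blocks.push_back(std::malloc(512));   // 512 B each, well below MMAP_THRESHOLD

    for (size_t i = 0; i < blocks.size(); ++i)
        std::free(blocks[i]);                 // goes back into glibc's pool, not to the OS
    blocks.clear();

    std::puts("freed - RSS in top is still high");
    sleep(30);

    malloc_trim(0);                           // ask glibc to return free heap pages to the kernel
    std::puts("trimmed - RSS should drop");
    sleep(30);
    return 0;
}

Run it and watch RSS in top at each pause: the drop only happens after the trim call, which shows that the freed memory was sitting in the allocator's pool all along.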
FWIW, GNU libc's malloc et al. make an exception for particularly large requests, so that they can be fully released for the OS / other programs to use before program termination; quoting from the NOTES section of the malloc(3) man page:
    When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the
    glibc malloc() implementation allocates the memory as a private anonymous
    mapping using mmap(2). MMAP_THRESHOLD is 128 kB by default, but is
    adjustable using mallopt(3). Allocations performed using mmap(2) are
    unaffected by the RLIMIT_DATA resource limit (see getrlimit(2)).
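To make the contrast concrete, here is a small sketch (the 64 MB size and the sleep durations are arbitrary choices) of a single allocation well above MMAP_THRESHOLD: glibc backs it with mmap(2), so free() unmaps it and top shows the RSS drop immediately.

// Sketch of the large-allocation exception: one block far above MMAP_THRESHOLD
// is backed by a private anonymous mmap(2), so free() unmaps it and the
// memory goes straight back to the OS (watch RSS drop in top).
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>

int main()
{
    const size_t big = 64 * 1024 * 1024;      // 64 MB, far above the 128 kB threshold
    char* p = static_cast<char*>(std::malloc(big));
    if (p == NULL)
        return 1;
    std::memset(p, 1, big);                   // touch every page so it shows up in RSS

    std::puts("allocated - RSS is roughly 64 MB higher");
    sleep(30);

    std::free(p);                             // munmap() under the hood
    std::puts("freed - RSS drops immediately");
    sleep(30);
    return 0;
}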