
Hi,

I am trying to model large memories (>8 GB) on a virtual platform that I am working on. I don't think using the C++ 'new' operator to allocate the entire chunk is a good idea. Can someone suggest methods they have used in the past to model this? My memory is going to be very sparse to start with and might only begin filling up at a later time.

Thanks,


You can start with std::unordered_map<address_t, data_t> and later switch to a "paged" solution as swami suggested, if the performance of unordered_map is not sufficient for your case.
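
A minimal sketch of such an unordered_map-backed sparse memory could look like this (address_t, data_t and the class name are placeholders for illustration, not taken from any particular library):

```cpp
#include <cstdint>
#include <unordered_map>

using address_t = uint64_t;
using data_t    = uint8_t;

class sparse_memory {
public:
    // Reads from locations that were never written return a default value (0 here).
    data_t read(address_t addr) const {
        auto it = mem_.find(addr);
        return it != mem_.end() ? it->second : 0;
    }

    // A write allocates a map entry only for the address actually touched.
    void write(address_t addr, data_t value) { mem_[addr] = value; }

private:
    std::unordered_map<address_t, data_t> mem_;
};
```

The per-entry overhead of the hash map is significant, so this is only attractive while the memory stays very sparse; a paged scheme amortizes that overhead once accesses cluster.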


The sparse array in the memory model I mentioned is implemented as paged memory with allocate-on-write semantics, as @swami060 describes. So you might have a look at it as an example.


If only a small fraction of the locations in the entire memory (>8 GB) is actually used, you can use on-demand paging, where memory is allocated in pages: a page is allocated only when a request hits an address whose page does not exist yet. This way only the pages that are actually needed get allocated, and you can achieve good overall simulation speed.
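
As a rough illustration of allocate-on-write paging (the 4 KiB page size and all names below are assumptions for this sketch, not taken from an existing model):

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <vector>

class paged_memory {
public:
    static constexpr uint64_t page_size = 4096;  // assumed page granularity

    uint8_t read(uint64_t addr) const {
        auto it = pages_.find(addr / page_size);
        // Unallocated pages read as 0 without being allocated.
        return it != pages_.end() ? (*it->second)[addr % page_size] : 0;
    }

    void write(uint64_t addr, uint8_t value) {
        auto& page = pages_[addr / page_size];
        if (!page)  // allocate the page on the first write that touches it
            page = std::make_unique<std::vector<uint8_t>>(page_size, 0);
        (*page)[addr % page_size] = value;
    }

private:
    // Only pages that have been written to consume host memory.
    std::unordered_map<uint64_t, std::unique_ptr<std::vector<uint8_t>>> pages_;
};
```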

However, with the above mechanism, if more and more pages get allocated, simulation speed will degrade, or the simulation can even crash because of a page allocation failure.

In that case, the memory can be implemented in a file and accessed using I/O operations. Simulation performance will be degraded, but this ensures the simulation will not stop because of a page allocation failure.
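
A very simple file-backed variant could use plain fstream seek/read/write calls; the class name, the constructor interface and the sparse-file creation trick below are assumptions for illustration only:

```cpp
#include <cstdint>
#include <fstream>
#include <string>

class file_backed_memory {
public:
    file_backed_memory(const std::string& path, uint64_t size) {
        // Create the backing file and set its length; on most filesystems this
        // yields a sparse file that does not consume the full size on disk immediately.
        std::ofstream create(path, std::ios::binary);
        create.seekp(static_cast<std::streamoff>(size) - 1);
        create.put(0);
        create.close();
        file_.open(path, std::ios::in | std::ios::out | std::ios::binary);
    }

    uint8_t read(uint64_t addr) {
        file_.seekg(static_cast<std::streamoff>(addr));
        char c = 0;
        file_.get(c);
        return static_cast<uint8_t>(c);
    }

    void write(uint64_t addr, uint8_t value) {
        file_.seekp(static_cast<std::streamoff>(addr));
        file_.put(static_cast<char>(value));
    }

private:
    std::fstream file_;
};
```

In practice you would probably cache recently used pages in host memory and only fall back to the file for cold pages, to limit the I/O cost per access.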

