Hariharan Bhagavatheeswara Posted July 3, 2019
Hi, I am trying to model large memories (>8 GB) on a virtual platform that I am working on. I don't think using the C++ 'new' operator to allocate the entire chunk up front is a good idea. Can someone suggest methods they know of, or have used in the past, to model such memories? My memory is going to be very sparse to start with and might only start filling up later. Thanks,
swami060 Posted July 3, 2019
You may try modeling a 'paged' memory, with allocate-on-write semantics.
Roman Popov Posted July 3, 2019
You can start with a std::unordered_map<address_t, data_t> and later switch to the "paged" solution swami suggested if the performance of unordered_map is not sufficient for your case.
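As a rough illustration of this suggestion (not code from the thread), here is a minimal sketch of an unordered_map-backed sparse memory. The names sparse_memory, read and write, and the byte-wide data_t are illustrative assumptions only:

```cpp
#include <cstdint>
#include <unordered_map>

using address_t = std::uint64_t;  // illustrative typedefs, not from the thread
using data_t    = std::uint8_t;

// Sparse memory: only addresses that have actually been written occupy host memory.
class sparse_memory {
public:
    // Writing allocates the map entry on demand.
    void write(address_t addr, data_t value) { mem_[addr] = value; }

    // Reading an address that was never written returns a default fill value.
    data_t read(address_t addr) const {
        auto it = mem_.find(addr);
        return it != mem_.end() ? it->second : data_t{0};
    }

private:
    std::unordered_map<address_t, data_t> mem_;
};
```

Note that one hash-map entry per byte carries noticeable per-entry overhead, so for heavier traffic it is usually better to use aligned words or whole pages as the mapped value type, which leads to the paged scheme discussed in this thread.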
Eyck Posted July 3, 2019
There is a memory model using a sparse array as backing storage and providing a TLM-2.0 socket which you might want to use. You will find it at https://git.minres.com/SystemC/SystemC-Components/src/branch/master/incl/scc/memory.h.
Hariharan Bhagavatheeswara Posted July 3, 2019
Swami/Roman, can you point me to some literature on how such paged memories are implemented? I am 'googling' for it out of curiosity, but it would help if you have something at hand that could be of use to me. Eyck, thanks, I will check it out.
Eyck Posted July 3, 2019
The sparse array of the memory I mentioned is implemented as a paged memory with allocate-on-write semantics, as @swami060 suggested. So you might have a look at it as an example.
deeku Posted July 4, 2019
If only a small fraction of the whole memory (which is >8 GB) is actually used, you can use on-demand paging: the memory is allocated in pages, and a page is allocated only when an access hits an address whose page does not yet exist. That way only the pages that are really needed get allocated, and you can achieve good overall simulation speed. However, as more and more pages get allocated, simulation speed degrades and the simulation can even crash because a page allocation fails. In that case the memory can be implemented in a file and accessed through I/O operations. Simulation performance will be lower, but the simulation will not stop because of an allocation failure.
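A minimal sketch of the allocate-on-write paging described above (illustrative only, not from any library in the thread; the class name paged_memory, the default 4 KiB page size, and the fill value are assumptions):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

// Paged memory with allocate-on-write: pages are created only when first
// written; reads of untouched pages return a default fill byte without
// allocating anything.
class paged_memory {
public:
    explicit paged_memory(std::size_t page_size = 4096, std::uint8_t fill = 0)
        : page_size_(page_size), fill_(fill) {}

    void write(std::uint64_t addr, const std::uint8_t* data, std::size_t len) {
        while (len) {
            auto& page = get_or_alloc_page(addr / page_size_);
            std::size_t off   = addr % page_size_;
            std::size_t chunk = std::min(len, page_size_ - off);
            std::memcpy(page.data() + off, data, chunk);
            addr += chunk; data += chunk; len -= chunk;
        }
    }

    void read(std::uint64_t addr, std::uint8_t* data, std::size_t len) const {
        while (len) {
            std::size_t off   = addr % page_size_;
            std::size_t chunk = std::min(len, page_size_ - off);
            auto it = pages_.find(addr / page_size_);
            if (it != pages_.end())
                std::memcpy(data, it->second.data() + off, chunk);
            else
                std::memset(data, fill_, chunk);  // page never written: return fill
            addr += chunk; data += chunk; len -= chunk;
        }
    }

private:
    std::vector<std::uint8_t>& get_or_alloc_page(std::uint64_t page_no) {
        auto it = pages_.find(page_no);
        if (it == pages_.end())
            it = pages_.emplace(page_no,
                                std::vector<std::uint8_t>(page_size_, fill_)).first;
        return it->second;
    }

    std::size_t page_size_;
    std::uint8_t fill_;
    std::unordered_map<std::uint64_t, std::vector<std::uint8_t>> pages_;
};
```

In a SystemC model this would typically sit behind a b_transport implementation; if the populated footprint grows too large for the host heap, the pages could instead be backed by a memory-mapped file, along the lines of the file-based fallback mentioned above.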