Showing results for tags 'memory'.
Found 2 results
Hi all,

In the context of a very large hardware design simulated in SystemC, we faced issues with the large memory footprint of our simulator. After long and thorough memory profiling, we identified a potentially large memory optimization: removing the unique name of each sc_object. To assess the footprint saving from this optimization, we tweaked the Accellera kernel implementation to skip unique-name generation for objects with an automatically generated name (such as signals), and we also removed the check for name uniqueness. The results were good: we saved around 17 percent of the total memory footprint for a simulation containing more than 3,000,000 sc_objects.

So my questions are the following:

Why does the SystemC standard require (page 124) that "Each sc_object shall have a unique hierarchical name reflecting its position in the object hierarchy"? I guess this is only for debug purposes, right?

Would it be possible to have a runtime option allowing the Accellera kernel implementation to disable this "debugging" feature in "production" simulators?

Thank you for any answers and comments you may have on this topic,

----- Manu

PS: this work has been recently published under the title "Speed And Accuracy Dilemma In NoC Simulation: What about Memory Impact?" and will be available soon.
Hi all,

I am using an LSF module, consisting only of gain and integrator blocks, to calculate the input and output energy of a simple ELN module. When I try to simulate a long period of time, I notice that the physical memory used by the process keeps increasing, until the simulation is stopped by the following exception:

Error: (E549) uncaught exception: std::bad_alloc
In file: ../../../../src/sysc/kernel/sc_except.cpp:98
In process: sca_implementation_0.cluster_process_0

That is probably because there is no free physical memory left on my machine. I noticed that if I comment out the LSF module, the simulation runs to the end.

Why does LSF use so much memory, more than ELN for example? I used LSF only because I needed the integrator block to integrate the power over time, so can I use one of the other MoCs to perform the same computation? Maybe TDF with some numerical integration method?

Thanks much!
Alessandro