AlexSax Posted December 3, 2014

Hi all,

I am using an LSF module, consisting only of gain and integrator blocks, to calculate the input and output energy of a simple ELN module. When I try to simulate a long period of time, I notice that the physical memory used by the process keeps increasing, up to the point where the simulation is stopped with the following exception:

    Error: (E549) uncaught exception: std::bad_alloc
    In file: ../../../../src/sysc/kernel/sc_except.cpp:98
    In process: sca_implementation_0.cluster_process_0

This is probably because there is no free physical memory left on my machine. I noticed that when I comment out the LSF module, the simulation runs to completion. Why does LSF use so much memory, much more than ELN for example? I used LSF only because I needed the integrator block to integrate the power over time. Could I use one of the other MoCs to perform the same computation, maybe TDF together with some numerical integration method?

Thanks much!
Alessandro
maehne Posted December 3, 2014

Hello Alessandro,

I'm not able to comment on the memory consumption of LSF models during simulation. However, SystemC-AMS shouldn't consume all the memory (i.e., RAM + swap) available, even when simulating over a long period. A common problem, though, is the size of tabular trace files, which can quickly consume a lot of disk space when simulating long time spans at high resolution, because the tabular trace file format does not use any kind of compression.

To reproduce your problem, could you provide a minimal executable example that exhibits the problem in your LSF models?

Regarding your application: you are not obliged to use LSF for the integration. You can also use a Laplace transfer function (LTF) H(s) = 1/s embedded into a TDF module (cf. the SystemC AMS 1.0 User's Guide); see the sketch below. Alternatively, you can do the integration by hand inside a TDF module using, e.g., the rectangular or trapezoidal rule.
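A minimal sketch of the LTF approach, assuming the SystemC AMS proof-of-concept library (module and port names are illustrative, not taken from your model):

    // Integrates its TDF input with H(s) = 1/s via an embedded LTF.
    #include <systemc-ams>

    SCA_TDF_MODULE(tdf_integrator)
    {
      sca_tdf::sca_in<double>  in;   // e.g., instantaneous power
      sca_tdf::sca_out<double> out;  // accumulated energy

      void initialize()
      {
        num(0) = 1.0;                // H(s) = 1 / s
        den(0) = 0.0;
        den(1) = 1.0;
      }

      void processing()
      {
        out.write(ltf(num, den, in.read()));
      }

      SCA_CTOR(tdf_integrator) {}

     private:
      sca_tdf::sca_ltf_nd ltf;               // numerator/denominator LTF solver
      sca_util::sca_vector<double> num, den; // transfer function coefficients
    };

The sca_ltf_nd object keeps its internal state between activations, so no history buffer should grow with simulated time.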
AlexSax Posted December 4, 2014

I'm sorry Torsten, I forgot to say that I am simulating a DC-DC converter model consisting of an ELN sub-module and a TDF sub-module. On top of that, there is the LSF module for calculating the input and output energy, which consists only of a couple of gain and integrator blocks.

I have since found that the problem is related to the simulation timestep. With a timestep of 10 ns, I could simulate around 0.4 s of operating time before the std::bad_alloc exception was thrown. With a timestep of 100 ns, the simulated time rose to around 4 s, and with a timestep of 1 µs I was able to simulate 10 s, which was my target. I also tried without tracing any signals, but that had no effect. The strange thing is that when I simulated only the converter model, without the LSF module, I could simulate 10 s of operation with a 10 ns timestep and no exception was thrown. Does the LSF solver keep all or most of the samples of the simulation in memory?

Thank you for the suggestion about the integration. I completely forgot about the LTF in TDF, which is probably a more efficient approach.
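For reference, the hand-coded trapezoidal-rule alternative mentioned above could look roughly like this in TDF (a sketch only; it assumes a constant timestep, and the module and port names are placeholders):

    // Trapezoidal-rule integrator in plain TDF.
    #include <systemc-ams>

    SCA_TDF_MODULE(trapz_integrator)
    {
      sca_tdf::sca_in<double>  in;   // instantaneous power [W]
      sca_tdf::sca_out<double> out;  // accumulated energy [J]

      void processing()
      {
        const double x  = in.read();
        const double dt = in.get_timestep().to_seconds();
        acc += 0.5 * (x + x_prev) * dt;   // trapezoidal rule
        x_prev = x;
        out.write(acc);
      }

      SCA_CTOR(trapz_integrator) : acc(0.0), x_prev(0.0) {}

     private:
      double acc;     // running integral
      double x_prev;  // previous input sample
    };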
maehne Posted December 5, 2014

Unfortunately, this information is not sufficient to diagnose a potential memory leak in the LSF MoC of Fraunhofer SystemC-AMS; we would need a compilable minimal test case for that. Could you also report the development platform you are using: OS version, processor architecture (i686, x86_64, ...), compiler version, the configure parameters used for compiling SystemC and SystemC-AMS, etc.?

It may also be helpful to monitor the memory consumption of the process (e.g., using top on Linux). You should also check where the bad_alloc exception is thrown from and then examine the call stack (bt command in gdb). Assuming that your model does not perform any dynamic memory allocation after it has been instantiated and the elaboration phase has finished, you should be able to localise the part of SystemC-AMS that allocates the memory. With some luck, it will be a stable location across different runs. By trying to reproduce the same error on different platforms, you may be able to narrow down the source of the problem further.
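For example, with a debug build on Linux (the binary name is a placeholder; the actual frames will of course differ):

    $ gdb ./dcdc_sim
    (gdb) catch throw      # break whenever a C++ exception is thrown
    (gdb) run
    ...                    # wait for the std::bad_alloc to be raised
    (gdb) bt               # print the call stack at the throw site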