Everything posted by MehdiT

  1. Terminating the threads after they finish their execution didn't help. I went back to the part of my code where I spawn dynamic threads and rewrote it. Now no processes are created dynamically in my code, and the problem seems to be solved. From this I conclude that it is perhaps not a good idea to spawn dynamic threads in large SystemC simulations. I also noticed that RAM usage grows exponentially when I use dynamic threads (with and without manual termination). It could be because I haven't done it in a correct way, though (see code above). Without dynamic threads, the usage of the RAM is
  2. Hi Alan, there are many reasons why I can't just migrate to sc_methods, among them the use of wait() statements. Killing the spawned threads sounds reasonable, although I thought this would be taken care of automatically. Do you mean I should use an sc_process_handle, wait for the thread to terminate, and then kill it with the handle's kill() method?
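     The pattern being discussed could be sketched roughly as below. This is only an illustration, assuming a SystemC installation; the module name, timings, and the worker body are made up, not taken from the original code. The parent keeps the sc_process_handle returned by sc_spawn, checks that the process is still alive, and then kills it explicitly so the kernel can reclaim its coroutine stack:

     ```cpp
     #include <systemc.h>

     SC_MODULE(Top) {
         sc_process_handle h;   // handle to the dynamically spawned thread

         SC_CTOR(Top) {
             SC_THREAD(parent);
         }

         // Hypothetical worker: loops forever, so it never terminates on its own.
         void worker() {
             while (true)
                 wait(10, SC_NS);
         }

         void parent() {
             // Spawn the dynamic thread and keep its handle.
             h = sc_spawn(sc_bind(&Top::worker, this));
             wait(100, SC_NS);
             // Kill it explicitly once it is no longer needed,
             // instead of letting it linger until end of simulation.
             if (h.valid() && !h.terminated())
                 h.kill();
             std::cout << "worker killed at " << sc_time_stamp() << std::endl;
         }
     };

     int sc_main(int argc, char* argv[]) {
         Top top("top");
         sc_start(200, SC_NS);
         return 0;
     }
     ```

     Note that kill() unwinds the target thread by throwing an sc_unwind_exception through it, so any cleanup should live in destructors of stack objects inside the thread.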
  3. Thanks Alan. In fact, all my modules are instantiated with "new" and I allocate memory dynamically everywhere to avoid using large arrays. I investigated the problem further with gdb and found that it may be related to the fact that I spawn a lot of dynamic threads. Here is the output of (gdb) backtrace:
     noc_exe: ../../../../src/sysc/kernel/sc_cor_qt.cpp:107: virtual void sc_core::sc_cor_qt::stack_protect(bool): Assertion `ret == 0' failed.
     [Thread debugging using libthread_db enabled]
     Program received signal SIGABRT, Aborted.
     0x00000038b5e328a5 in raise () from /lib64/libc.so.6
     Missing
  4. My top-level is a Network-on-Chip (NoC) with a grid of 15*15 nodes (Router+PE). I've been trying to simulate it on different machines/configurations, but it kept stopping at different points in the simulation. I can't see what causes the problem, and hence I am stuck. 1) Running only the SystemC/C++ executable (output of the compiler) on an LSF cluster: the simulation runs normally with the expected output, then stops at around cycle 55000 (cycle-accurate model) with this error message: noc_exe: ../../../../src/sysc/kernel/sc_cor_qt.cpp:107: virtual void sc_core::sc_cor_qt::stack_protect(bool): Asse
  5. The time at which a packet is generated is stored in the header of the packet. The header allocates only 32 bits for this timestamp. For any variable of type sc_time T_start, T_start.to_default_time_unit() will return the number of clock cycles if the default time unit is configured to (CLK_PERIOD, SC_NS). The question here is how to convert sc_time values to a 32-bit unsigned integer without a large loss of precision. At the producer side, this piece of code runs each time a packet is generated: sc_time t_start = sc_time_stamp(); double magnitude = t_start.to_default_time_unit(); // number of cloc
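     The conversion step itself can be sketched without SystemC, since to_default_time_unit() just yields a double. Below is a minimal, hypothetical helper (cycles_to_u32 is not from the original code) that turns that magnitude into a 32-bit timestamp, assuming the default time unit equals one clock cycle. Because a double carries a 53-bit mantissa, every cycle count that fits in 32 bits is represented exactly, so the cast itself loses no precision; the only real risk is the simulation running past 2^32 - 1 cycles, which this sketch handles by saturating:

     ```cpp
     #include <cstdint>
     #include <cmath>
     #include <iostream>

     // Hypothetical helper: convert a magnitude in clock cycles (as returned
     // by sc_time::to_default_time_unit()) into a 32-bit packet timestamp.
     // Saturates instead of wrapping if the count no longer fits in 32 bits.
     uint32_t cycles_to_u32(double magnitude) {
         double cycles = std::round(magnitude);      // drop any sub-cycle fraction
         if (cycles < 0.0) return 0;
         if (cycles >= 4294967296.0) return UINT32_MAX;  // saturate on overflow
         return static_cast<uint32_t>(cycles);
     }

     int main() {
         std::cout << cycles_to_u32(55000.0) << "\n";       // 55000
         std::cout << cycles_to_u32(4294967296.0) << "\n";  // 4294967295
         return 0;
     }
     ```

     At 4294967295 cycles of a 1 ns clock the counter covers roughly 4.29 s of simulated time, so for most NoC runs the 32-bit field is sufficient; if not, the consumer side would need to track wrap-arounds.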