maehne Posted May 19, 2021

Thanks @Guillaume Audeon for this additional piece of information!
joshwhieb Posted September 6, 2022

Hi @maehne and @Guillaume Audeon,

Has Guillaume's patch been applied to the SystemC source on GitHub's master branch, or is there a plan to do so? Also, could we generate a new SystemC release, 2.3.5 or something like that version-wise? @Philipp A Hartmann's fix earlier in this thread was applied to master after the 2.3.4 release, so it would be good to get a new tag out so that others can avoid this issue. https://github.com/accellera-official/systemc/commits/master

I run into a similar problem when running SystemC simulations within Kubernetes (batch-job Docker containers). A simulation configuration that has more SC_THREADs runs into the 'ret == 0' sc_cor_qt assert with SystemC 2.3.3; the rest of the simulation configurations run fine in Kubernetes. I tried compiling and using the latest SystemC master and ran into a segmentation fault similar to what Guillaume reported with the faulty configuration. I tried to apply Guillaume's patch to the master branch, but it doesn't apply cleanly and I'm still running into runtime errors. Oddly, the simulations run fine when not in the Kubernetes environment, which could be a runtime issue of the cluster container environment and/or the host OS; I'm still trying to figure that out.

Thanks for all the work on this thread, it's appreciated,
Josh Hieb

On 3/24/2021 at 5:43 AM, Guillaume Audeon said:

Hello,

I recently stumbled across the same issue, but in a different manner: I configure, build and install SystemC on RHEL7, then execute on CentOS7. Before applying the above fix, I encountered the same sc_assert() failure; after applying the fix, I got a segmentation fault. I have found that I can fix this by using mmap()/munmap() to allocate/deallocate memory for the QuickThread stack. See attached a possible patch for this.

Kind regards,
Guillaume

sc_cor_qt.patch 2.8 kB · 6 downloads
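[Editor's note: the attached sc_cor_qt.patch is not reproduced in the thread. The sketch below is a hypothetical, standalone illustration of the kind of change Guillaume describes, i.e. allocating a page-aligned coroutine stack with mmap() and releasing it with munmap() instead of using malloc()/free(). Function names such as allocate_stack() and deallocate_stack() are made up for illustration and do not appear in the SystemC kernel.]

```cpp
// Hypothetical sketch, not the actual sc_cor_qt.patch: an mmap()-based
// stack allocation for a QuickThread-style coroutine.
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

// Round the requested stack size up to a whole number of pages.
static std::size_t round_up_to_page(std::size_t size)
{
    const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    return (size + page - 1) & ~(page - 1);
}

// Allocate an anonymous, page-aligned region usable as a coroutine stack.
void* allocate_stack(std::size_t requested_size, std::size_t& actual_size)
{
    actual_size = round_up_to_page(requested_size);
    void* stack = mmap(nullptr, actual_size,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS,
                       -1, 0);
    if (stack == MAP_FAILED) {
        perror("mmap");
        return nullptr;
    }
    return stack;
}

// Release a stack previously obtained from allocate_stack().
void deallocate_stack(void* stack, std::size_t actual_size)
{
    if (stack != nullptr)
        munmap(stack, actual_size);
}
```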
joshwhieb Posted October 25, 2022

I took another stab with the mmap()/munmap() changes, got past the seg fault, and it's running successfully. I have opened a pull request so that it's at least documented on GitHub. Are there drawbacks and/or performance penalties to using mmap()/munmap() vs. malloc()/free()? The QuickThread code also calls mprotect(), which cannot reliably be used on memory that wasn't obtained via mmap().

https://github.com/accellera-official/systemc/pull/35
https://stackoverflow.com/questions/48106059/calling-mprotect-on-dynamically-allocated-memory-results-in-error-with-error-cod

Quote:

TL;DR version: Use mmap() with MAP_ANONYMOUS to allocate full pages (multiples of sysconf(_SC_PAGESIZE)), instead of using malloc(). The anonymous pages are freed when the process exits, so if they exist for the duration of the process, you do not need to clean them up. If you want to release the pages, use munmap().

Longer version: Small malloc()s tend to end up in BSS, and they might overlap with pages mapped from the ELF executable (as rw-p in /proc/self/maps), so you cannot reliably use mprotect() on malloc()'d memory.
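[Editor's note: to illustrate the point made in the quoted answer, here is a small standalone example (not code from the SystemC kernel) showing mprotect() used to set up a guard page on an mmap()'d region. This is the kind of page-level protection the QuickThread stack code relies on, and it is exactly what malloc()'d memory cannot reliably support, since malloc() gives no alignment or page-ownership guarantees.]

```cpp
// Standalone illustration: mprotect() works on page-aligned memory
// obtained from mmap(), e.g. to place a no-access guard page below a
// downward-growing coroutine stack so that overflow faults immediately.
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

int main()
{
    const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    const std::size_t size = 16 * page;   // example stack size: 16 pages

    // Page-aligned anonymous mapping: a safe target for mprotect().
    void* stack = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (stack == MAP_FAILED) { perror("mmap"); return 1; }

    // Make the lowest page inaccessible; for a stack growing downward,
    // an overflow then triggers a fault instead of silently corrupting
    // adjacent memory. On malloc()'d memory this call may fail or
    // affect pages the allocator still owns.
    if (mprotect(stack, page, PROT_NONE) != 0) {
        perror("mprotect");
        munmap(stack, size);
        return 1;
    }

    munmap(stack, size);
    return 0;
}
```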