Everything posted by sheridp@umich.edu

  1. Sorry for the delay, I just saw this reply. My use case is modeling a CPU (at a pretty high, task level) using an SC_THREAD. The CPU cycles through its list of tasks, which involves sending messages to and receiving messages from other components (let's call them slaveA and slaveB). However, if the CPU makes it through its whole list of tasks and has accomplished no useful work (e.g. it is polling slaves A and B and finding them not ready), then I need the CPU to sleep until either is ready. This is to help speed up the simulation (each cycle through the tasks involves multiple wait statements for small time increments if there's no useful work to do; I could just accumulate the time in loose fashion, but there are other reasons not to). Rather than having to know a priori about slaveA's and slaveB's wakeup events, I have a pointer to slaveA's event returned in the message after a call to b_transport. Originally, I collected these events in an sc_event_or_list, and if the CPU made it through its list of tasks without accomplishing work, it would wait on them. However, I realized that if, while checking slaveB, slaveA notified its wakeup event, I would miss it, and the CPU would wait when it should go back and check slaveA again; it might even deadlock if nothing wakes the CPU up after that. Instead, what I wanted was a method that would set a flag telling the CPU to keep cycling. After polling slaveA and finding it busy, I would modify that method's sensitivity to include the event returned by slaveA. Because next_trigger is a protected method, however, I cannot access it from within the SC_THREAD of the CPU. What I settled on was similar to this, but instead of modifying a single method's sensitivity, I spawn new instances of the flag-setting method immediately after I receive the event pointer (before yielding, so that I cannot miss slaveA's notification).
  2. This is exactly what I want to do, but next_trigger(sc_event_or_list&) is protected within sc_method_process.
  3. Is it possible to modify an sc_method_process's dynamic sensitivity from outside the method itself? While I can call next_trigger when the method is running, it would be preferable to add event sensitivity from another method/thread. If not, one workaround I came up with is the following:

     sc_core::sc_event_or_list dynamic_sensitivity;
     sc_core::sc_event sensitivity_change_event;
     sc_core::sc_event new_event;

     // In the constructor:
     SC_METHOD(Change_Me_Method);
     sensitive << sensitivity_change_event;
     SC_THREAD(Changer_Thread);

     void Change_Me_Method() {
         if (!sensitivity_change_event.triggered()) {
             // Do stuff
         }
         next_trigger(dynamic_sensitivity | sensitivity_change_event);
     }

     void Changer_Thread() {
         dynamic_sensitivity |= new_event;
         sensitivity_change_event.notify();
     }

     Wondering if there is a cleaner solution.
  4. Eyck, Not to quibble, but I disagree with that interpretation. You're right I don't have hierarchical binding, and by the first sentence, I should register a transport method. However, the second sentence says that if I don't register one, a runtime error will be generated if and only if the corresponding method is called, which in my code, it isn't. I think this becomes even more problematic if I had configured the socket to be SC_ZERO_OR_MORE_BOUND, in which case it's even more likely that I might not register a transport method if I intend not to connect the socket. I can definitely understand that perhaps my interpretation relies too heavily on the second sentence, but I'd say the spec could probably be made more clear in that regard. Let me know your thoughts.
  5. I ran into an issue using multi_passthrough_target_sockets: it is required that you register at least one transport method (b_transport or nb_transport_fw) or you'll get the following error: This is in contrast to the spec section, section l: It's not really a big problem (why would you ever not register at least one?), but it is something I ran into during development, as I was checking my hierarchical binding structure but had not yet created my transport methods. The error message is particularly confusing, because the target_socket is bound, and the runtime error is thrown after elaboration, and not when the corresponding method is called. The following code will reproduce the error:

     #include <systemc.h>
     #include <tlm.h>
     #include <tlm_utils/multi_passthrough_target_socket.h>
     #include <tlm_utils/multi_passthrough_initiator_socket.h>

     class Container : sc_module {
     public:
         tlm_utils::multi_passthrough_target_socket<Container, 1, tlm::tlm_base_protocol_types, 0, SC_ZERO_OR_MORE_BOUND> target_socket;
         tlm_utils::multi_passthrough_initiator_socket<Container, 1, tlm::tlm_base_protocol_types, 0, SC_ZERO_OR_MORE_BOUND> initiator_socket;

         Container(sc_module_name name) {
             initiator_socket.bind(target_socket);
             // If both lines below remain commented out, the error is generated
             // target_socket.register_b_transport(this, &Container::b_transport);
             // target_socket.register_nb_transport_fw(this, &Container::nb_transport_fw);
         }

         void b_transport(int id, tlm::tlm_generic_payload& trans, sc_time& delay) { }

         tlm::tlm_sync_enum nb_transport_fw(int id, tlm::tlm_generic_payload& trans, tlm::tlm_phase& phase, sc_time& delay) {
             return tlm::TLM_COMPLETED; // added so the stub returns a value
         }
     };

     int sc_main(int argc, char *argv[]) {
         Container c("container");
         sc_start();
         return 0;
     }
  6. Hi Philipp, Thanks for your response. You are right, I think the spec is clear on that point. I think I had some confusion regarding the default flags on the compiler, but, as you pointed out, that is beyond the scope of the SystemC spec.
  7. The following Dockerfile will reproduce the problem:

     FROM alpine:3.9 as builder
     RUN apk add --no-cache build-base linux-headers
     WORKDIR /opt

     FROM builder as builder_systemc
     COPY systemc-2.3.3.gz /opt
     RUN mkdir /opt/systemc_src && \
         tar -xf systemc-2.3.3.gz -C /opt/systemc_src --strip-components=1 && \
         cd /opt/systemc_src && \
         ./configure --prefix /opt/systemc-2.3.3 --enable-debug CXXFLAGS="-DSC_CPLUSPLUS=201703L" && \
         make -j$(nproc) && \
         make install

     Which results in the following error:

     CXX kernel/sc_simcontext.lo
     In file included from kernel/sc_simcontext.cpp:57:
     ../../src/sysc/utils/sc_string_view.h:62:29: error: 'string_view' in namespace 'std' does not name a type
      typedef SC_STRING_VIEW_NS_::string_view sc_string_view;
                                  ^~~~~~~~~~~

     It looks like the following is related:

     #if SC_CPLUSPLUS >= 201402L && defined(__has_include)
     # if SC_CPLUSPLUS > 201402L && __has_include(<string_view>) /* since C++17 */
     #   define SC_STRING_VIEW_NS_ std
     #   include <string_view>
     # elif __has_include(<experimental/string_view>) /* available in Library Fundamentals, ISO/IEC TS 19568:2015 */
     #   define SC_STRING_VIEW_NS_ std::experimental
     #   include <experimental/string_view>
     # endif
     #else
     // TODO: other ways to detect availability of std::(experimental::)string_view?
     #endif

     I'm guessing that defining SC_CPLUSPLUS did not cause the -std=c++17 flag to be passed to the compiler, because <string_view> contains the following:

     #if __cplusplus >= 201703L

     Update: Modifying the ./configure command to:

     ./configure --prefix /opt/systemc-2.3.3 --enable-debug CXXFLAGS="-DSC_CPLUSPLUS=201703L -std=c++17"

     solves the problem. It might be worthwhile mentioning this in the INSTALL notes.
  8. Hi Mathieu, I know this post is over 5 years old, but did you ever find something like this?
  9. Hook, Yes, this is correct. The initiator calls nb_transport_fw in the target with a delay of 10ns timing annotation. In order to respect the timing annotation, the target placed the transaction into a payload_event_queue (PEQ) and it will emerge from the PEQ after 10ns (this is done by calling peq_with_get.notify(trans, delay) where the delay argument is what was passed into nb_transport_fw). Similarly at time = 125ns, the target calls nb_transport_bw to the initiator with a delay of 10ns. The initiator does the same thing--puts the transaction into its own PEQ with a delay of 10ns.
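     The PEQ behavior described above can be illustrated with a SystemC-free sketch (SimplePEQ is a hypothetical stand-in for tlm_utils::peq_with_get, with times as plain doubles instead of sc_time; this is not actual TLM code): a transaction notified with a delay annotation only becomes visible once simulation time reaches notification time plus delay.

     ```cpp
     #include <cassert>
     #include <functional>
     #include <queue>
     #include <utility>
     #include <vector>

     // Simplified stand-in for tlm_utils::peq_with_get: transactions notified
     // with a delay annotation emerge in order of (current_time + delay).
     struct SimplePEQ {
         using Entry = std::pair<double, int>; // (emergence time, transaction id)
         std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> q;

         void notify(int trans_id, double now, double delay) {
             q.push({now + delay, trans_id}); // schedule emergence at now + delay
         }
         // Pop the next transaction whose emergence time has been reached, or -1.
         int get_next(double now) {
             if (q.empty() || q.top().first > now) return -1;
             int id = q.top().second;
             q.pop();
             return id;
         }
     };

     int main() {
         SimplePEQ peq;
         // Target receives nb_transport_fw at t=115ns with a 10ns annotation:
         peq.notify(/*trans_id=*/1, /*now=*/115.0, /*delay=*/10.0);
         assert(peq.get_next(120.0) == -1); // not yet visible at 120ns
         assert(peq.get_next(125.0) == 1);  // emerges at 125ns
         return 0;
     }
     ```

     The same pattern applies on the backward path: the initiator's PEQ holds the response until the annotated delay has elapsed.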
  10. Is it just that it is inappropriate to try to model at this high level of abstraction--like locking a bus without an explicit arbiter--using AT?
  11. Basically, what I'm interested in is a multi-producer, multi-consumer model over a shared bus. When not using TLM, I would implement it as follows:

     class top : sc_module {
     public:
         sc_mutex bus;
         producer p1, p2;
         consumer c1, c2;

         top() : p1("p1", bus), p2("p2", bus), c1("c1"), c2("c2") {
             p1.port(c1.port);
             p2.port(c2.port);
         }
     };

     class producer : sc_module {
     public:
         sc_mutex& bus;
         sc_fifo_out<transaction_object*> port;

         producer(const sc_core::sc_module_name &name_, sc_mutex& bus_) : bus(bus_) {
             SC_THREAD(thread);
         }

         void thread() {
             while (true) {
                 wait(production_time, SC_US);
                 auto trans = new transaction_object();
                 bus.lock(); // ensure exclusive access to the bus
                 wait(transmission_time, SC_SEC);
                 // Potentially blocks if the consumer fifo is full; if we don't want
                 // to hold the bus the whole time, we could wait for free space in
                 // the consumer fifo before locking the bus.
                 port.write(trans);
                 bus.unlock();
             }
         }
     };

     class consumer : sc_module {
     public:
         sc_core::sc_export<sc_fifo<transaction_object*>> port;
         sc_fifo<transaction_object*> fifo;

         consumer(const sc_core::sc_module_name &name_) {
             SC_THREAD(thread);
             port(fifo);
         }

         void thread() {
             while (true) {
                 auto trans = fifo.read();
                 wait(consumption_time, SC_SEC);
             }
         }
     };

     * Please note, I didn't try to compile the above code and I'm sure there are some syntax bugs, in addition to the obvious memory leak. This works great, and I think it's easy to understand, but (1) it doesn't follow TLM2.0 standards and (2) I think it might be significantly slower than TLM2.0 AT style. The problem I'm having is just that TLM2.0 AT style is so counter-intuitive for me, with callbacks, phases, and timing annotations. Further, I agree with the advice given by David Black on this forum that SC_THREADs are much more intuitive to understand than SC_METHODs, but I don't see how to implement AT style in a straightforward manner using SC_THREADs.
A good example is the SimpleBusAT (given in systemc-2.3.2/examples/tlm/common/include/models/SimpleBusAT.h); this looks far more complex than using an sc_mutex and sc_fifo, but I'm wondering if there is a simple design pattern for AT to follow that could help ease model development.
  12. If a third-party model is designed to take a tlm_generic_payload, and you derive from tlm_generic_payload, then it should handle your transaction object correctly, ignoring any additional fields that may be defined on the object. If a target wants to make use of those additional fields, it could dynamic_cast the tlm_generic_payload* to your derived type* (using C++ RTTI to ensure that it actually has a derived-type object). I understand that the standard was instead developed around extensions, but I'm wondering what the motivation was for going in this direction.
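     The dynamic_cast approach described here can be sketched in plain C++ (BasePayload and MyPayload are hypothetical stand-ins for tlm_generic_payload and a user-derived type; this is not actual TLM code): a generic target sees only the base fields, while an aware target recovers the extra field safely via RTTI.

     ```cpp
     #include <cassert>

     // Hypothetical stand-ins: BasePayload plays the role of tlm_generic_payload,
     // MyPayload a user-derived type carrying an extra field.
     struct BasePayload {
         virtual ~BasePayload() = default; // polymorphic base enables RTTI
         int address = 0;
     };

     struct MyPayload : BasePayload {
         int priority = 7; // extra field a generic target simply ignores
     };

     // A target that knows about MyPayload can recover the extra field safely:
     int effective_priority(BasePayload* p) {
         if (auto* mine = dynamic_cast<MyPayload*>(p))
             return mine->priority; // derived object: use the extra field
         return 0;                  // plain payload: fall back to a default
     }

     int main() {
         MyPayload derived;
         BasePayload plain;
         assert(effective_priority(&derived) == 7);
         assert(effective_priority(&plain) == 0);
         return 0;
     }
     ```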
  13. Does anyone know of any open source approximately timed TLM2.0 models? While there a few examples included in the SystemC download, I am looking for examples of real hardware models to get a better understanding of how model implementers actually use the base protocol (SC_METHOD vs SC_THREAD; 1, 2 or 4 phase, timing annotation, peq_with_get vs peq_with_cb_and_phase, etc.).
  14. Section 14.2 of the IEEE STD 1666-2011 states: I am wondering why the extension mechanism is preferred, since you could simply ignore the additional fields provided in a derived class.
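     For contrast, here is a simplified, hypothetical sketch of the idea behind the extension mechanism (the real API is tlm_generic_payload::set_extension / get_extension, which uses per-type extension indices rather than a map; Payload, Extension and the concrete extension classes below are invented for illustration): each party attaches its own extension object to the payload, so independently developed extensions can coexist on one transaction without a shared derivation hierarchy, which is the composability that a single derived class cannot provide.

     ```cpp
     #include <cassert>
     #include <map>
     #include <memory>
     #include <typeindex>

     struct Extension {
         virtual ~Extension() = default;
     };

     // Simplified payload: extensions are looked up by their static type,
     // so two unrelated parties can each attach their own data.
     struct Payload {
         std::map<std::type_index, std::unique_ptr<Extension>> extensions;

         template <typename T>
         void set_extension(std::unique_ptr<T> ext) {
             extensions[typeid(T)] = std::move(ext);
         }
         template <typename T>
         T* get_extension() {
             auto it = extensions.find(typeid(T));
             return it == extensions.end() ? nullptr : static_cast<T*>(it->second.get());
         }
     };

     // Two unrelated extensions, e.g. from different IP vendors:
     struct PriorityExt : Extension { int priority = 3; };
     struct TraceExt : Extension { int trace_id = 42; };
     struct UnusedExt : Extension { };

     int main() {
         Payload p;
         p.set_extension(std::make_unique<PriorityExt>());
         p.set_extension(std::make_unique<TraceExt>());
         assert(p.get_extension<PriorityExt>()->priority == 3);
         assert(p.get_extension<TraceExt>()->trace_id == 42);
         assert(p.get_extension<UnusedExt>() == nullptr); // absent: nullptr, not UB
         return 0;
     }
     ```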
  15. Hi Philipp, Sure. Can you point me in the direction of the official SystemC repo (I currently just downloaded the source from http://accellera.org/downloads/standards/systemc). I found the following: https://github.com/systemc/systemc-2.3 , which looks official, but it's not version 2.3.2. On the other hand, I found https://github.com/avidan-efody/systemc-2.3.2, but it looks unofficial. If there is no official repo that includes 2.3.2, I can just create one from the downloaded source. Thanks, Patrick
  16. Actually, even if you're just modeling hardware, you might want to use sc_spawn in the case that you have a run-time configurable number of hardware blocks. SC_THREADS have to be declared at compile time.
  17. Hi Sumit, I think when you are just modeling hardware with fixed modules, there probably isn't a whole lot of use for sc_spawn (I'm sure others could find some uses). However, in my models, I do a lot of modeling of firmware state machines at a pretty high level of abstraction (i.e. I'm writing models before any firmware is available, so the models are not using something like instruction set simulators). In these models, a command is received and then a number of different actions are taken in sequence throughout the lifetime of the command, sometimes waiting on hardware or other threads, etc. The easiest way to model this is as an sc_thread:

     void run(Command received_command) {
         do_x(received_command);
         wait(10, SC_US);
         do_y(received_command);
         mutex.lock();
         do_z(received_command);
         mutex.unlock();
         ...
     }

     The thing is, at compile time, I have no idea how many commands I'll receive; this depends on run-time inputs to the model. So I typically have a fixed module using SC_THREAD receiving the commands, but then for each command it receives, it kicks off a run using sc_spawn:

     SC_THREAD(receiver);

     void receiver() {
         while (true) {
             auto command = fifo.read();
             sc_spawn(sc_bind(&run, command));
         }
     }
  18. Update: I wrote the following in an attempt to replicate the start_of_simulation sc_spawn segfault and found that it's pretty hard to re-create; the following program appears to do it pretty reliably though:

     #define SC_INCLUDE_DYNAMIC_PROCESSES
     #include <systemc.h>
     #include <iostream>

     class Test : public sc_core::sc_module {
         SC_HAS_PROCESS(Test);
     public:
         void run() {
             std::cout << "Spawned" << std::endl;
         }
         void run2() {
             for (int i = 0; i < 10; ++i) {
                 wait(10, SC_US);
                 sc_core::sc_spawn(sc_bind(&Test::run, this));
             }
         }
         void run3() {
             for (int i = 0; i < 100; ++i)
                 wait(10, SC_US);
         }
         Test(const sc_core::sc_module_name &name_) {
             SC_THREAD(run2);
             SC_THREAD(run3);
         }
         void start_of_simulation() {
             for (int i = 0; i < 10; ++i)
                 sc_core::sc_spawn(sc_bind(&Test::run, this));
         }
     };

     int sc_main(int argc, char *argv[]) {
         Test test("test");
         sc_start();
     }

     // g++ -O3 -std=c++14 -I/opt/systemc-2.3.2/include -L/opt/systemc-2.3.2/lib-linux64 -lsystemc test.cpp

     Also, if I leave everything the same but switch out start_of_simulation for end_of_elaboration, I still get the segfault. The bug depends on the order of additions to the process table (because the static process' destructor will walk the table until it finds a match). If I comment out either run3 or the spawn in run2, I don't see the segfault. Hopefully this is helpful in finding a solution.
  19. Also, one small modification to your suggested code: I noticed that if you wait on an empty sc_event_and_list, it will never progress (which kind of makes sense--with no events, there's nothing to wake the thread). With that in mind, I would change wait(processes_running); to if(processes_running.size()) wait(processes_running); On the other hand, it might be nice for debuggability if waiting on an empty sc_event_and_list didn't wait at all or threw an exception.
  20. Hi Philipp, Thanks for your reply. I like your suggestion of building the sc_event_and_list immediately before waiting on it and checking that processes are still alive while doing so; I'm sure I would have hit hard-to-debug problems if I had waited on processes that had already terminated, even if I kept the handles around. With regards to the second problem, I can confirm I am using SystemC 2.3.2. I have not tried spawning at end_of_elaboration; I've refactored my code so my spawn occurs in another location--neither end_of_elaboration, nor start_of_simulation. If I get a chance, I will run that experiment. In any case, I agree that it seems the current implementation has a bug by not removing the dynamic thread, created in start_of_simulation, from the process table. The logic needed in the destructor to identify these particular dynamic threads might not be worth it since one could simply use the SC_THREAD macro to statically spawn them at the start (potentially with a helper function if dynamic arguments need to be bound). The fix could be as simple as documenting that sc_spawn'ing in start_of_simulation is not allowed.
  21. David, thank you for running those experiments showing a <3% difference. As I model a lot of software state machines, I greatly prefer to use sc_threads, but have always wondered if the context-saving nature of threads makes my programs slower. In profiling, I see that memcpy is called often and seems to be a factor in runtime. With that in mind, is an sc_thread context switch with lots of local variables more expensive than if I instead put those variables as class members so that they are not local to the thread?
  22. It is in my code: void My_Class::start_of_simulation(){ sc_core::sc_spawn(sc_bind(&My_Class::my_method, this)); } I don't see exactly why this should cause problems, but if I do it, then approximately 50% of the time at the end of simulation, I get a segfault with the above traceback. Because the segfault is happening when traversing the sc_process_table::queue, I'm guessing that when spawning a thread at start of simulation (which intentionally finishes, by the way), it gets added to the process_table, but for whatever reason, never removed. The process itself is removed (I think this happens in sc_simcontext::crunch), but because the table isn't updated, the linked-list has an invalid ptr, hence the segfault. Just a guess though. EDIT: Sorry, I think I misunderstood your question, you are asking where the traceback is from? It is triggered at the end of the execution during model cleanup; you can see that it is in the destructor ( sc_core::sc_thread_process::~sc_thread_process ). That's actually the destructor of one of my statically created (SC_THREAD) threads, not the dynamic one. During its destruction, it traverses the sc_process_table::queue and hits a bad pointer when doing sc_core::sc_thread_process::next_exist.
  23. I have the following code:

     sc_core::sc_event_and_list processes_complete;
     for (int i = 0; i < 10; ++i) {
         auto process = sc_core::sc_spawn(sc_bind(&My_Class::my_method, this, args));
         processes_complete &= process.terminated_event();
         wait(10, SC_US); // Comment me
     }
     wait(processes_complete); // Line 58

     And I get a Segmentation Fault with the following traceback:

     #0  0x00007ffff7a1439d in sc_core::sc_event_list::add_dynamic(sc_core::sc_thread_process*) const () from /opt/systemc-2.3.2/lib-linux64//libsystemc-2.3.2.so
     #1  0x00007ffff7a36cf2 in sc_core::wait(sc_core::sc_event_and_list const&, sc_core::sc_simcontext*) () from /opt/systemc-2.3.2/lib-linux64//libsystemc-2.3.2.so
     #2  0x00005555555801d2 in sc_core::sc_module::wait (el=..., this=0x7fffffffd8c8) at /opt/systemc-2.3.2/include/sysc/kernel/sc_module.h:189
     #3  My_Class::my_other_method (this=0x7fffffffd8c8) at source/my_class.cpp:58
     #4  0x00007ffff7a31caf in sc_core::sc_thread_cor_fn(void*) () from /opt/systemc-2.3.2/lib-linux64//libsystemc-2.3.2.so
     #5  0x00007ffff7b135df in qt_blocki () from /opt/systemc-2.3.2/lib-linux64//libsystemc-2.3.2.so

     My understanding is that the sc_process_handle should persist even if the sc_thread_process itself is destroyed, and the terminated_event is a member of the sc_process_handle, but looking at the traceback, it looks like the sc_event_and_list is doing something with the sc_thread_process, which probably doesn't exist anymore. Any suggestions; am I doing something wrong here? Do I need to keep the sc_process_handle in scope in order to prevent its terminated_event from being destroyed?

     EDIT: It appears keeping the sc_process_handle in scope is necessary. If I add the sc_process_handles to a list, i.e.

     sc_core::sc_event_and_list processes_complete;
     std::list<sc_process_handle> processes;
     for (int i = 0; i < 10; ++i) {
         auto process = sc_core::sc_spawn(sc_bind(&My_Class::my_method, this, args));
         processes.push_back(process);
         processes_complete &= process.terminated_event();
         wait(10, SC_US); // Comment me
     }
     wait(processes_complete); // Line 58

     then the segfault disappears. Interestingly, if I remove the wait(10, SC_US) from the for loop, it also disappears, but that may just be luck as to when the handle is destroyed. On that note, do I need to keep handles on spawned processes if I just want them to run and be destroyed when they finish? I.e., I have a lot of code where I don't assign the return value of sc_spawn.

     EDIT2: I am sometimes getting a segfault at the end of my simulation with the following traceback:

     #0  0x00007ffff7a4d724 in sc_core::sc_thread_process::next_exist (this=0x555500000000) at ../../src/sysc/kernel/sc_thread_process.h:426
     #1  0x00007ffff7a4f0e3 in sc_core::sc_process_table::queue<sc_core::sc_thread_process*>::remove (this=0x5555557d5fe8, handle=0x555555933400) at kernel/sc_simcontext.cpp:184
     #2  0x00007ffff7a4e32b in sc_core::sc_process_table::remove (this=0x5555557d5fe0, handle=0x555555933400) at kernel/sc_simcontext.cpp:141
     #3  0x00007ffff7a49067 in sc_core::sc_simcontext::remove_process (this=0x7ffff6ff2900, handle=0x555555933400) at kernel/sc_simcontext.cpp:200
     #4  0x00007ffff7a552c7 in sc_core::sc_thread_process::~sc_thread_process (this=0x555555933400, __in_chrg=<optimized out>) at kernel/sc_thread_process.cpp:484
     #5  0x00007ffff7a55300 in sc_core::sc_thread_process::~sc_thread_process (this=0x555555933400, __in_chrg=<optimized out>) at kernel/sc_thread_process.cpp:486
     #6  0x00007ffff7a42fb0 in sc_core::sc_process_b::delete_process (this=0x555555933400) at kernel/sc_process.cpp:177
     #7  0x00007ffff7a2ecfd in sc_core::sc_process_b::reference_decrement (this=0x555555933400) at ../../src/sysc/kernel/sc_process.h:611
     #8  0x00007ffff7a4ea9c in sc_core::sc_simcontext::crunch (this=0x7ffff6ff2900, once=false) at kernel/sc_simcontext.cpp:525
     #9  0x00007ffff7a4a475 in sc_core::sc_simcontext::simulate (this=0x7ffff6ff2900, duration=...) at kernel/sc_simcontext.cpp:889
     #10 0x00007ffff7a4c3e0 in sc_core::sc_start (duration=..., p=sc_core::SC_EXIT_ON_STARVATION) at kernel/sc_simcontext.cpp:1710
     #11 0x00007ffff7a4c52e in sc_core::sc_start () at kernel/sc_simcontext.cpp:1745
     #12 0x00005555555a0164 in sc_main (argc=<optimized out>, argv=<optimized out>) at source/main.cpp:42
     #13 0x00007ffff7a3583f in sc_core::sc_elab_and_sim (argc=2, argv=0x7fffffffec98) at kernel/sc_main_main.cpp:87
     #14 0x00007ffff7a3565b in main (argc=2, argv=0x7fffffffec98) at kernel/sc_main.cpp:36
     #15 0x00007ffff7d8f954 in __libc_start_main () from /lib/ld-musl-x86_64.so.1
     #16 0x0000000000000000 in ?? ()

     Using GDB, I see that handle->name() is one of my statically created processes. Not sure why next_exist() is causing a segfault, but I'm beginning to wonder: is this because I'm not keeping handles on my dynamically created processes?

     EDIT 3: Found what I think is the cause of this problem: apparently you shouldn't sc_spawn threads in a module's start_of_simulation() method. It seems this is unrelated to my previous question, which still stands: if I don't keep a process_handle on a spawned process, is it allowed to run indefinitely, or is there a chance it will be destroyed? Based on my current code, it looks like it gets to run forever (or until completion), but I'd like to confirm that.
  24. Hi Roman, Thank you for your reply. I think I understand the argument for delta cycles as it applies to an sc_signal: the consumer should get the old value of the signal in the current delta cycle, even if another process writes to it in the same delta cycle. For sc_fifo, however, the consumer should always be getting the first-in data, regardless of execution order, by the nature of FIFOs, right? I.e., let's say our non-empty fifo contains {1, 2, 3}, one process is attempting to write a 6, and another process is attempting to read. If the write executes first, the fifo will look like {6, 1, 2, 3}, and then when the read executes it will look like {6, 1, 2}. Whereas if the read executed first, it would look like {1, 2}, and then after the write, {6, 1, 2}. In either case the reader got 3, and the fifo looks the same at the end. Now, I can see that if the fifo were full and the writer executed first, it would have to wait for the next delta cycle, whereas if the reader executed first, it wouldn't (and vice versa on an empty fifo), but since the notify is for SC_ZERO_TIME, I don't see why this matters too much.
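     The worked example above can be checked mechanically with a plain C++ sketch (std::deque standing in for the fifo's storage, written in the post's {newest, ..., oldest} notation, so the back element is the next one out; this is not the sc_fifo implementation): either execution order yields the same read value and the same final fifo contents.

     ```cpp
     #include <cassert>
     #include <deque>

     int main() {
         // Fifo written as {newest, ..., oldest}: in {1, 2, 3}, 3 is next out.
         std::deque<int> fifo = {1, 2, 3};

         // Order A: write 6 first, then read.
         std::deque<int> a = fifo;
         a.push_front(6);       // {6, 1, 2, 3}
         int read_a = a.back(); // reader gets 3
         a.pop_back();          // {6, 1, 2}

         // Order B: read first, then write 6.
         std::deque<int> b = fifo;
         int read_b = b.back(); // reader gets 3
         b.pop_back();          // {1, 2}
         b.push_front(6);       // {6, 1, 2}

         assert(read_a == 3 && read_b == 3); // same value read either way
         assert(a == b);                     // same final fifo state
         return 0;
     }
     ```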