About sheridp@umich.edu

  1. sheridp@umich.edu

    SystemC generator

    Hi Mathieu, I know this post is over 5 years old, but did you ever find something like this?
  2. sheridp@umich.edu

    Timing Annotation

    Hook, Yes, this is correct. The initiator calls nb_transport_fw in the target with a timing annotation (delay) of 10 ns. In order to respect the timing annotation, the target places the transaction into a payload event queue (PEQ), and it emerges from the PEQ 10 ns later; this is done by calling peq_with_get::notify(trans, delay), where the delay argument is what was passed into nb_transport_fw. Similarly, at time = 125 ns, the target calls nb_transport_bw on the initiator with a delay of 10 ns. The initiator does the same thing: it puts the transaction into its own PEQ with a delay of 10 ns.
  3. sheridp@umich.edu

    AT Examples

    Is it just that it is inappropriate to try to model at this high a level of abstraction (e.g. locking a bus without an explicit arbiter) using AT?
  4. sheridp@umich.edu

    Generic Payload Extensions

    I see, thanks for the info. Regards, Patrick
  5. sheridp@umich.edu

    AT Examples

    Basically, what I'm interested in is a multi-producer, multi-consumer model over a shared bus. When not using TLM, I would implement it as follows:

```cpp
class producer : public sc_module {
public:
    sc_mutex& bus;
    sc_fifo_out<transaction_object*> port;

    SC_HAS_PROCESS(producer);
    producer(const sc_core::sc_module_name& name_, sc_mutex& bus_) : bus(bus_) {
        SC_THREAD(thread);
    }

    void thread() {
        while (true) {
            wait(production_time, SC_US);
            auto* trans = new transaction_object();
            bus.lock();                      // ensure exclusive access to the bus
            wait(transmission_time, SC_SEC);
            port.write(trans);               // potentially blocks if the consumer fifo is full;
                                             // if we don't want to tie up the bus the whole time,
                                             // we could wait for free space in the consumer fifo
                                             // before locking the bus
            bus.unlock();
        }
    }
};

class consumer : public sc_module {
public:
    sc_core::sc_export<sc_fifo<transaction_object*>> port;
    sc_fifo<transaction_object*> fifo;

    SC_HAS_PROCESS(consumer);
    consumer(const sc_core::sc_module_name& name_) {
        SC_THREAD(thread);
        port(fifo);
    }

    void thread() {
        while (true) {
            auto trans = fifo.read();
            wait(consumption_time, SC_SEC);
        }
    }
};

class top : public sc_module {
public:
    sc_mutex bus;
    producer p1{"p1", bus};
    producer p2{"p2", bus};
    consumer c1{"c1"};
    consumer c2{"c2"};

    top(const sc_core::sc_module_name& name_) {
        p1.port(c1.port);
        p2.port(c2.port);
    }
};
```

    (Please note, I didn't try to compile the above code and I'm sure there are some syntax bugs, in addition to the obvious memory leak.) This works great, and I think it's easy to understand, but (1) it doesn't follow the TLM-2.0 standard and (2) I think it might be significantly slower than TLM-2.0 AT style. The problem I'm having is that TLM-2.0 AT style is just so counter-intuitive for me, with its callbacks, phases, and timing annotations. Further, I agree with the advice given by David Black on this forum that SC_THREADs are much more intuitive to understand than SC_METHODs, but I don't see how to implement AT style in a straightforward manner using SC_THREADs.

    A good example is SimpleBusAT (given in systemc-2.3.2/examples/tlm/common/include/models/SimpleBusAT.h); it looks far more complex than using an sc_mutex and sc_fifo, and I'm wondering if there is a simple design pattern for AT that could help ease model development.
  6. sheridp@umich.edu

    Generic Payload Extensions

    If a third-party model is designed to take a tlm_generic_payload, and you derive from tlm_generic_payload, then it should handle your transaction object correctly, ignoring any additional fields defined on the derived object. If a target wants to make use of those additional fields, it could dynamic_cast the tlm_generic_payload* to your derived type (using C++ RTTI to verify that the object actually is of the derived type). I understand that the standard was instead developed around extensions, but I'm wondering what the motivation was for going in this direction.
  7. sheridp@umich.edu

    AT Examples

    Does anyone know of any open-source approximately-timed TLM-2.0 models? While there are a few examples included in the SystemC download, I am looking for examples of real hardware models to get a better understanding of how model implementers actually use the base protocol (SC_METHOD vs. SC_THREAD; 1, 2, or 4 phases; timing annotation; peq_with_get vs. peq_with_cb_and_phase; etc.).
  8. Regarding Section 14.2 of IEEE Std 1666-2011: I am wondering why the extension mechanism is preferred, since one could simply ignore the additional fields provided in a derived class.
  9. sheridp@umich.edu

    Segfault on sc_process_handle.terminated_event() in sc_event_and_list

    Hi Philipp, Sure. Can you point me in the direction of the official SystemC repo? (I currently just downloaded the source from http://accellera.org/downloads/standards/systemc.) I found https://github.com/systemc/systemc-2.3, which looks official, but it's not version 2.3.2. On the other hand, I found https://github.com/avidan-efody/systemc-2.3.2, but it looks unofficial. If there is no official repo that includes 2.3.2, I can just create one from the downloaded source. Thanks, Patrick
  10. sheridp@umich.edu

    Dynamic process creation using sc_spawn

    Actually, even if you're just modeling hardware, you might want to use sc_spawn in the case that you have a run-time-configurable number of hardware blocks; SC_THREAD processes have to be declared at compile time.
  11. sheridp@umich.edu

    Dynamic process creation using sc_spawn

    Hi Sumit, I think when you are just modeling hardware with fixed modules, there probably aren't a whole lot of uses for sc_spawn (I'm sure others could find some). However, in my models, I do a lot of modeling of firmware state machines at a pretty high level of abstraction (i.e. I'm writing models before any firmware is available, so the models are not using something like instruction-set simulators). In these models, a command is received, and then a number of different actions are taken in sequence throughout the lifetime of the command, sometimes waiting on hardware or other threads, etc. The easiest way to model this is as a thread:

```cpp
void run(Command received_command) {
    do_x(received_command);
    wait(10, SC_US);
    do_y(received_command);
    mutex.lock();
    do_z(received_command);
    mutex.unlock();
    // ...
}
```

    The thing is, at compile time I have no idea how many commands I'll receive; that depends on run-time inputs to the model. So I typically have a fixed module using SC_THREAD to receive the commands, but for each command it receives, it kicks off a run using sc_spawn:

```cpp
SC_THREAD(receiver);  // registered in the module's constructor

void receiver() {
    while (true) {
        auto command = fifo.read();
        // run is a member function of this module (here called my_module),
        // so the this pointer must be bound along with the command
        sc_spawn(sc_bind(&my_module::run, this, command));
    }
}
```
  12. sheridp@umich.edu

    Segfault on sc_process_handle.terminated_event() in sc_event_and_list

    Update: I wrote the following in an attempt to replicate the start_of_simulation sc_spawn segfault and found that it's pretty hard to re-create; the following program appears to do it pretty reliably, though:

```cpp
#define SC_INCLUDE_DYNAMIC_PROCESSES
#include <systemc.h>
#include <iostream>

class Test : public sc_core::sc_module {
    SC_HAS_PROCESS(Test);
public:
    void run() {
        std::cout << "Spawned" << std::endl;
    }
    void run2() {
        for (int i = 0; i < 10; ++i) {
            wait(10, SC_US);
            sc_core::sc_spawn(sc_bind(&Test::run, this));
        }
    }
    void run3() {
        for (int i = 0; i < 100; ++i)
            wait(10, SC_US);
    }
    Test(const sc_core::sc_module_name& name_) {
        SC_THREAD(run2);
        SC_THREAD(run3);
    }
    void start_of_simulation() {
        for (int i = 0; i < 10; ++i)
            sc_core::sc_spawn(sc_bind(&Test::run, this));
    }
};

int sc_main(int argc, char* argv[]) {
    Test test("test");
    sc_start();
    return 0;
}

// g++ -O3 -std=c++14 -I/opt/systemc-2.3.2/include -L/opt/systemc-2.3.2/lib-linux64 test.cpp -lsystemc
```

    Also, if I leave everything the same but switch out start_of_simulation for end_of_elaboration, I still get the segfault. The bug depends on the order of additions to the process table (because the static process' destructor will walk the table until it finds a match). If I comment out either run3 or the spawn in run2, I don't see the segfault. Hopefully this is helpful in finding a solution.
  13. sheridp@umich.edu

    Segfault on sc_process_handle.terminated_event() in sc_event_and_list

    Also, one small modification to your suggested code: I noticed that if you wait on an empty sc_event_and_list, the thread will never progress (which kind of makes sense: with no events, there's nothing to wake it). With that in mind, I would change wait(processes_running); to if (processes_running.size()) wait(processes_running);. On the other hand, it might be nice for debuggability if waiting on an empty sc_event_and_list didn't wait at all, or threw an exception.
  14. sheridp@umich.edu

    Segfault on sc_process_handle.terminated_event() in sc_event_and_list

    Hi Philipp, Thanks for your reply. I like your suggestion of building the sc_event_and_list immediately before waiting on it and checking that processes are still alive while doing so; I'm sure I would have hit hard-to-debug problems if I had waited on processes that had already terminated, even if I kept the handles around. With regards to the second problem, I can confirm I am using SystemC 2.3.2. I have not tried spawning at end_of_elaboration; I've refactored my code so my spawn occurs in another location--neither end_of_elaboration, nor start_of_simulation. If I get a chance, I will run that experiment. In any case, I agree that it seems the current implementation has a bug by not removing the dynamic thread, created in start_of_simulation, from the process table. The logic needed in the destructor to identify these particular dynamic threads might not be worth it since one could simply use the SC_THREAD macro to statically spawn them at the start (potentially with a helper function if dynamic arguments need to be bound). The fix could be as simple as documenting that sc_spawn'ing in start_of_simulation is not allowed.
  15. sheridp@umich.edu

    Debugging Multi threaded program in SystemC

    David, thank you for running those experiments showing a <3% difference. As I model a lot of software state machines, I greatly prefer to use SC_THREADs, but I have always wondered whether the context-saving nature of threads makes my programs slower. In profiling, I see that memcpy is called often and seems to be a factor in runtime. With that in mind, is a context switch for an sc_thread with lots of local variables more expensive than one where those variables are class members, so that they are not stored on the thread's stack?