
Leaderboard


Popular Content

Showing content with the highest reputation since 08/26/2018 in all areas

  1. 2 points
    Please be aware that an sc_event_and_list does not imply that the events in the list are triggered at the same time. I would suggest keeping only the clock sensitivity and acting on the triggers in the body of the method instead:

        SC_METHOD(func2);
        sensitive << clk.pos();
        dont_initialize();
        // ...
        void func2() {
          if( nreset.posedge() ) {
            // nreset went high in this clock cycle
            // ...
          }
        }

    Alternatively, you can be sensitive to nreset.pos() and check for clk.posedge() (as a consistency check), if you don't have anything else to do in the body of the method. With this approach, you might be able to avoid unnecessary triggers of the method (a minimal sketch of this alternative follows below). Side note to Eyck: There's a small typo in the example above, which should use "&=" to append to an sc_event_and_list: ev_list &= nreset;
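    A minimal sketch (module and port names are illustrative) of the alternative just described: trigger only on the release of nreset and use clk.posedge() as the consistency check.

        #include <systemc>

        SC_MODULE(reset_watch) {
          sc_core::sc_in<bool> clk{"clk"}, nreset{"nreset"};

          SC_CTOR(reset_watch) {
            SC_METHOD(on_reset_release);
            sensitive << nreset.pos();     // triggered only when nreset goes high
            dont_initialize();
          }

          void on_reset_release() {
            if( clk->posedge() ) {
              // nreset was released together with a rising clock edge
            }
          }
        };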
  2. 2 points
    The issue is most likely that you access a port (via operator-> or by calling functions like .read()) already inside the module constructor. You should only access ports after binding has completed, i.e. from within a SystemC process or in the end_of_elaboration() / start_of_simulation() callbacks. Hope that helps, Philipp
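    A minimal sketch (module and port names are illustrative): the port is only dereferenced once binding is guaranteed to be complete, i.e. in end_of_elaboration() rather than in the constructor.

        #include <systemc>
        #include <iostream>

        SC_MODULE(consumer) {
          sc_core::sc_in<int> data_in{"data_in"};

          SC_CTOR(consumer) {
            // data_in->read() here would fail: the port is not bound yet
          }

          void end_of_elaboration() override {
            std::cout << "initial value: " << data_in->read() << std::endl;
          }
        };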
  3. 2 points
    Actually, you can start a sequence in any phase. It is more important to understand the domain/scheduling relationships between the task based (i.e. runtime) phases. UVM undergoes a number of pre-simulation phases (build, connect, end_of_elaboration, start_of_simulation) that are all implemented with functions. Once those are completed, the task based phases begin. The standard includes two schedules. One is simply the run_phase, which starts executing at time zero and continues until all components have dropped their objections within the run_phase. The other schedule contains twelve phases that execute in parallel to the run_phase. They are: pre_reset, reset, post_reset, pre_config, config, post_config, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown. They execute in sequence. Every component has the opportunity to define or not define tasks to execute these phases. A phase starts only when all components in the previous phase have dropped their objections. A phase continues to execute until all components have dropped their objections in the current phase. Many companies use the run_phase for everything because there are some interesting issues to consider when crossing phase boundaries. In some respects it may be easier to use uvm_barriers for synchronization. Drivers and monitors (things that touch the hardware) are usually run exclusively in the run_phase, but there is nothing to prevent them also having reset_phase, main_phase, etc...
  4. 1 point
    Eyck

    bind multi ports to other port.

    Another option would be to use a resolved signal and connect all output ports to it. But these are just technical implementation options. The question to me is: what would you like to model? Is this the right way to model the intent? Best regards
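    A minimal sketch of the resolved-signal option (module and port names are illustrative): several output ports bind to the same sc_signal_resolved, and the channel resolves the contributions of all drivers.

        #include <systemc>
        #include <iostream>

        SC_MODULE(driver) {
          sc_core::sc_out_resolved q{"q"};
          sc_dt::sc_logic value;

          SC_HAS_PROCESS(driver);
          driver(sc_core::sc_module_name nm, sc_dt::sc_logic v)
            : sc_core::sc_module(nm), value(v) {
            SC_THREAD(run);
          }

          void run() { q.write(value); }
        };

        int sc_main(int, char*[]) {
          sc_core::sc_signal_resolved line{"line"};
          driver d0{"d0", sc_dt::SC_LOGIC_Z};   // releases the line
          driver d1{"d1", sc_dt::SC_LOGIC_1};   // actively drives '1'
          d0.q(line);
          d1.q(line);                           // a second driver on the same channel is legal here

          sc_core::sc_start(1, sc_core::SC_NS);
          std::cout << "resolved value: " << line.read() << std::endl;  // '1'
          return 0;
        }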
  5. 1 point
    You have contradictory requirements: a) put the stored value on the output when enable == 1, which sounds like a DFF with output enable; b) put the input to the output when enable == 1, which sounds more like a latch. Anyway, in both cases you will need to make the process sensitive to the enable signal, and such low-level logic is usually modeled with SC_METHODs. In a SystemC context, "register" usually means some memory-mapped CSR on a TLM bus 🙂
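    A minimal sketch of interpretation a) (module and port names are illustrative): an SC_METHOD sensitive to the clock edge and to enable, capturing on the edge and exposing the stored value while enable is high.

        #include <systemc>

        SC_MODULE(reg_with_oe) {
          sc_core::sc_in<bool> clk{"clk"}, enable{"enable"};
          sc_core::sc_in<int>  d{"d"};
          sc_core::sc_out<int> q{"q"};

          int stored = 0;

          SC_CTOR(reg_with_oe) {
            SC_METHOD(update);
            sensitive << clk.pos() << enable;   // react to clock edges and enable changes
          }

          void update() {
            if (clk->posedge())
              stored = d->read();               // capture on the rising clock edge
            if (enable->read())
              q->write(stored);                 // expose the stored value while enabled
          }
        };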
  6. 1 point
    If you want immediate communication between threads, then you should use regular SC_THREADs and wait on an event, like wait(some_signal.value_changed_event());
  7. 1 point
    You would want to use an sc_event_and_list; see IEEE 1666, section 5.8. As its intended use is with next_trigger() and wait(), you would need to move the sensitivity into your method. So the constructor part becomes SC_METHOD(func2); and func2 should look something like this (snippet of your module):

        sc_core::sc_event_and_list ev_list;

        void end_of_elaboration(){
          ev_list |= clk.posedge_event();
          ev_list |= nreset;
        }

        void func2(){
          next_trigger(ev_list);
          // your code here ...
        }
  8. 1 point
    There are several ways to do this. You might use a different way to carry signals that is more TLM-like; one option would be to use tlm_signal. The other common option: if your CPU writes into the register of your interrupt controller via TLM, the transaction carries the delay, which is essentially the offset of the CPU's local time to the SystemC kernel time. If you just break the quantum here and call wait() so that the SystemC kernel can sync up and the signal change propagates, you should be fine.
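    A minimal sketch of the second option (the register decoding and all names are assumptions): the target raises the interrupt line and then consumes the annotated delay, so the kernel can catch up and the signal change propagates before the initiator runs further ahead in its local time.

        #include <systemc>
        #include <tlm>
        #include <tlm_utils/simple_target_socket.h>

        struct irq_ctrl : sc_core::sc_module {
          tlm_utils::simple_target_socket<irq_ctrl> socket{"socket"};
          sc_core::sc_out<bool> irq_out{"irq_out"};

          SC_CTOR(irq_ctrl) {
            socket.register_b_transport(this, &irq_ctrl::b_transport);
          }

          void b_transport(tlm::tlm_generic_payload& gp, sc_core::sc_time& delay) {
            if (gp.is_write()) {
              irq_out.write(true);           // drive the IRQ line
              sc_core::wait(delay);          // quantum break: let the kernel catch up
              delay = sc_core::SC_ZERO_TIME; // the annotated delay has been consumed
            }
            gp.set_response_status(tlm::TLM_OK_RESPONSE);
          }
        };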
  9. 1 point
    Eyck

    tracing waveforms in tlm

    You may use the TLM recorder of the SystemC Components library (SCC, esp. here: https://git.minres.com/SystemC/SystemC-Components/src/branch/master/incl/scv4tlm). This writes into a text database (the SCV default). You can view this e.g. with Impulse (https://toem.de/index.php/projects/impulse) or SCViewer (https://git.minres.com/VP-Tools/SCViewer, binaries here: https://github.com/Minres/SCViewer/releases). Basically you instantiate scc::tlm2_recorder_module and connect its initiator and target sockets to your target and initiator sockets. Your sc_main should then look like:

        int sc_main(int argc, char *argv[]) {
          scv_startup();
          scv_tr_text_init();
          scv_tr_db db("my_db.txlog");
          scv_tr_db::set_default_db(&db);
          dut_mod dut("dut");
          sc_start(1.0, SC_MS);
          return 0;
        }

    The database is closed upon destruction of the db object. Best regards
  10. 1 point
    Roman Popov

    Systemc performance

    Real-life simulation performance usually depends a lot on modeling style. For high-level TLM-2.0 models, the share of simulation time consumed by SystemC primitives is usually much lower compared to the time consumed by the "business logic" of the models. The efficiency of the simulation kernel (context switches, channels, and so on) is much more important for low-level RTL simulations.
  11. 1 point
    Philipp A Hartmann

    timer with systemC

    The match will occur (almost) at the "correct" point in time during the simulation. However, if you sample the value from an unrelated process, there might be some process evaluation ordering dependency (i.e. whether the update_method had already been run). It depends on your requirements, whether this might be an issue. If you do the checks outside of the simulation, i.e. between sc_start calls, you would need to complete the deltas (as per the loop sketched above) before every check. You cannot call sc_start during end_of_simulation.
  12. 1 point
    From the snippets you provide it looks OK. Assuming that bus_mutex is an sc_mutex, this is your problem: sc_mutex does not do arbitration. It selects randomly which waiting thread is granted the lock (actually the next activated process, based on an event notification). But what you expect is arbitration. So either you write an arbiter, or you may use an ordered semaphore with an initial counter value of 1. You may find an implementation here: https://git.minres.com/SystemC/SystemC-Components/src/branch/master/incl/scc/ordered_semaphore.h The semaphore grants access based on a queue, so you effectively get first-come-first-served (FCFS) behavior where all requestors have equal priority. Best regards
  13. 1 point
    Philipp A Hartmann

    timer with systemC

    You can run the delta cycles at the current time until everything settles: while( sc_pending_activity_at_current_time() ) sc_start( SC_ZERO_TIME );
  14. 1 point
    Eyck

    using clocks in tlm

    Adding to David's answer: you can supply clocks to the modules having initiator and/or target sockets. This can be done using the usual signal/sc_in/sc_out means as David mentioned. There is also the option to have a 'tick-less' clock as David presented many years ago (http://www.nascug.org/events/12th/nascug12_david_black.pdf). This can also be achieved by distributing a signal just carrying the clock period as sc_time and calculating the time points in the modules. This way it is even possible to model timers or PWM (e.g. https://git.minres.com/DBT-RISE/DBT-RISE-RISCV/src/branch/develop/platform/incl/sysc/SiFive/pwm.h). This modelling style avoids the many context switches implied by toggling clocks. All this holds true for LT and PV modelling; if you think about AT modelling the story becomes different. But this is as usual: you trade speed for accuracy. BR
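    A minimal sketch of the tick-less idea (module, port names, and the cycle count are illustrative): the module receives the clock period as an sc_time value and computes the timeout with a single timed notification instead of counting clock edges.

        #include <systemc>
        #include <iostream>

        SC_MODULE(tickless_timer) {
          sc_core::sc_in<sc_core::sc_time> clk_period{"clk_period"};  // carries the period, not edges
          sc_core::sc_event expired;

          SC_CTOR(tickless_timer) {
            SC_THREAD(stimulus);
            SC_METHOD(on_expire);
            sensitive << expired;
            dont_initialize();
          }

          void stimulus() {
            wait(sc_core::SC_ZERO_TIME);     // let the period signal settle
            start(10);                       // "10 clock cycles", without any clock toggling
          }

          void start(unsigned cycles) {
            expired.notify(cycles * clk_period.read());  // one timed notification
          }

          void on_expire() {
            std::cout << sc_core::sc_time_stamp() << ": timer expired" << std::endl;
          }
        };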
  15. 1 point
    Unfortunately, there is currently no way to obtain the name of the affected signals/variables in this message. You can suppress the warning via: sc_core::sc_report_handler::set_actions(sc_core::SC_ID_TRACING_VCD_DUPLICATE_TIME_, sc_core::SC_DO_NOTHING);
  16. 1 point
    David Black

    How to Method work with event?

    Notify (either case) is non-blocking, so your call to notify followed by initialize will happen. Then, after you return, the notified element(s) may execute. notify() implies execution will be in the same delta cycle, whereas notify(SC_ZERO_TIME) postpones it to the next one and allows other processes in the current delta cycle to complete. Take a look at <https://github.com/dcblack/SystemC-Engine/blob/master/Engine_v2.4.pdf>.
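    A minimal sketch (module and event names are illustrative) contrasting the two: the immediate notification is caught in the same delta cycle, the SC_ZERO_TIME notification in the next one.

        #include <systemc>
        #include <iostream>

        SC_MODULE(notify_demo) {
          sc_core::sc_event e_immediate, e_delta;

          void producer() {
            e_immediate.notify();                   // immediate: same evaluation phase
            e_delta.notify(sc_core::SC_ZERO_TIME);  // delta: next delta cycle
          }

          void on_immediate() {
            std::cout << "immediate caught in delta " << sc_core::sc_delta_count() << std::endl;
          }

          void on_delta() {
            std::cout << "delta caught in delta " << sc_core::sc_delta_count() << std::endl;
          }

          SC_CTOR(notify_demo) {
            SC_THREAD(producer);
            SC_METHOD(on_immediate); sensitive << e_immediate; dont_initialize();
            SC_METHOD(on_delta);     sensitive << e_delta;     dont_initialize();
          }
        };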
  17. 1 point
    Philipp A Hartmann

    sensitivity list

    In SystemC 2.3.2 and later, you can use the sc_event::triggered() member function to query whether an event was triggered (and thus might have caused your method to run):

        if( event1.triggered() ) { std::cout << "event1 got triggered"; }
        if( event2.triggered() ) { std::cout << "event2 got triggered"; }

    Please note that if both events were triggered in/for the same evaluation phase, your method might well be run only once.
  18. 1 point
    At first glance, you can call b_transport from C++, but it must be C++ that runs inside a SystemC process during the simulation phase if the b_transport call invokes sc_core::wait(). You can read the details in the IEEE 1666-2011 specification, or perhaps you should sign up for the Doulos SystemC TLM-2.0 course and get expert hands-on training. Fundamentally, b_transport is a simple function call; however, it may use SystemC semantics to accomplish its work. Hence you should also be knowledgeable about SystemC SC_THREADs (wait() may not be used in SC_METHODs). Technically, you could write a b_transport method in the target that does not call sc_core::wait, which is desirable anyhow. If you did this, then you may call it from pretty much anywhere; however, SystemC is not thread-safe without special precautions. In any event, there is a lot to learn about the subtleties of the generic payload and extensions too.
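    A minimal sketch (module and socket names are assumptions; socket binding is done elsewhere): b_transport is issued from an SC_THREAD so the target is free to call wait() internally.

        #include <systemc>
        #include <tlm>
        #include <tlm_utils/simple_initiator_socket.h>

        struct initiator : sc_core::sc_module {
          tlm_utils::simple_initiator_socket<initiator> socket{"socket"};

          SC_CTOR(initiator) { SC_THREAD(run); }

          void run() {
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            unsigned char data[4] = {0};

            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(0x1000);
            trans.set_data_ptr(data);
            trans.set_data_length(4);
            trans.set_streaming_width(4);
            trans.set_byte_enable_ptr(nullptr);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

            socket->b_transport(trans, delay);  // may suspend this thread via wait()
            sc_core::wait(delay);               // consume the annotated delay
          }
        };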
  19. 1 point
    Eyck

    why is,what is, and how is tlm used ?

    Well, in TLM 1.0 there is even a tlm::tlm_fifo channel which provides something similar to what you use. The sockets defined in TLM 2.0 are more geared towards memory-mapped busses and provide facilities to model for speed (DMI, loosely-timed blocking interfaces) or accuracy (approximately-timed non-blocking interfaces). To achieve this with pure SystemC-provided classes takes some effort, and it ends up being proprietary... BR -Eyck
  20. 1 point
    From the SystemC standard: SystemC requires that you add sc_module_name as a parameter to module constructors, like this: MULT(sc_module_name, int var_a, int var_b); MULT(sc_module_name); This is a hack that allows the SystemC kernel to track the lifetime of the constructor.
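    A minimal sketch of such a constructor (the class name follows the example above; the member variables are illustrative):

        #include <systemc>

        SC_MODULE(MULT) {
          int a, b;

          // sc_module_name must appear as a constructor parameter;
          // user-defined parameters can follow it.
          MULT(sc_core::sc_module_name nm, int var_a, int var_b)
            : sc_core::sc_module(nm), a(var_a), b(var_b) {}
        };

        // usage: MULT m("m0", 2, 3);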
  21. 1 point
    If you code your FFT as a pure C++ function, Vivado HLS will generate inputs and outputs automatically from the function parameters and based on INTERFACE pragmas, and you can import the generated RTL into Vivado IP Integrator. Check out the Xilinx documentation and ask on the Xilinx forums. Please also note that you can implement an FFT in various ways (with different micro-architectures) and achieve various performance vs. area numbers. And HLS also requires quite a lot of learning to achieve good results.
  22. 1 point
    Xilinx HLS tools have very limited SystemC support, you should probably code your FFT in pure C++.
  23. 1 point
    You paint a very bleak and incorrect picture of the HLS tool. I will suggest that you need to get some training on its use. Xilinx have many examples and their documentation is quite good. Document UG902 clearly documents the HLS math library, which supports all manner of synthesizable operations. For instance:
    Trigonometric Functions: acos, atan, cospi, acospi, atan2, sin, asin, atan2pi, sincos, asinpi, cos, sinpi, tan, tanpi
    Hyperbolic Functions: acosh, asinh, cosh, atanh, sinh, tanh
    Exponential Functions: exp, exp10, exp2, expm1, frexp, ldexp, modf
    Logarithmic Functions: ilogb, log, log10, log1p
    Power Functions: cbrt, hypot, pow, rsqrt, sqrt
    Error Functions: erf, erfc
    Gamma Functions: lgamma, lgamma_r, tgamma
    Rounding Functions: ceil, floor, llrint, llround, lrint, lround, nearbyint, rint, round, trunc
    and that's only a few. Perhaps your grasp of C++ and what can or cannot be synthesized is limited. For instance, dynamically allocated memory is forbidden because it is not reasonable to expect silicon to grow new logic during operation. Please read the fine manual (RTFM).
  24. 1 point
    maehne

    Making a port optional

    If you don't use the member functions that were added for convenience to `sc_in`, `sc_out`, and `sc_inout` (to, e.g., call `read()`, `write()`, and the event member functions via the `.` operator rather than via the corresponding member function in the interface accessed via the `->` operator), you might be able to avoid entirely the derivation of new port classes. Instead, you could simply use a template alias, which was introduced with C++11:

        template<typename T>
        using sc_in_opt = sc_core::sc_port<sc_signal_in_if<T>, 1, SC_ZERO_OR_MORE_BOUND>;
        template<typename T>
        using sc_inout_opt = sc_core::sc_port<sc_signal_inout_if<T>, 1, SC_ZERO_OR_MORE_BOUND>;
        template<typename T>
        using sc_out_opt = sc_core::sc_port<sc_signal_inout_if<T>, 1, SC_ZERO_OR_MORE_BOUND>;

    If you want to also provide all member functions of `sc_in`, `sc_out`, and `sc_inout`, you will have to derive from the `sc_port` class and implement the full interface as defined in IEEE Std 1666-2011.
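    A minimal usage sketch (module, port, and message strings are illustrative), assuming the `sc_in_opt` alias from above: because the port is declared with SC_ZERO_OR_MORE_BOUND it may legally stay unbound, and size() tells you at runtime whether it was bound.

        #include <systemc>
        #include <iostream>

        template<typename T>
        using sc_in_opt = sc_core::sc_port<sc_core::sc_signal_in_if<T>, 1,
                                           sc_core::SC_ZERO_OR_MORE_BOUND>;

        SC_MODULE(consumer) {
          sc_in_opt<bool> irq_in{"irq_in"};   // may legally stay unbound

          SC_CTOR(consumer) { SC_THREAD(run); }

          void run() {
            if (irq_in.size() > 0)
              std::cout << "irq = " << irq_in->read() << std::endl;
            else
              std::cout << "irq_in left unbound" << std::endl;
          }
        };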
  25. 1 point
    You cannot check typeid() against a string as this is compiler dependent. What you should do is:

        void write_out() {
          if( typeid(T) == typeid(sc_dt::sc_logic) ) {
            out.write(SC_LOGIC_Z);
          } else {
            out.write(0);
          }
        }

    But actually this is more a C++ related question; in my experience Stack Overflow is a good source of help. Cheers
  26. 1 point
    Eyck

    TLM extension

    Actually this is done in the destructor of the tlm_generic_payload, which calls the free() function for all extensions. So if the payload is properly destroyed, all extensions are destroyed as well. The other option is to call free_all_extensions() explicitly, which also calls free() for all extensions as well as for auto extensions (those might be registered when a memory manager for the generic payload is used, usually in AT-style modelling using the non-blocking interfaces). HTH
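    A minimal sketch (the extension type my_ext is illustrative): with no memory manager set, the payload's destructor calls free() on the attached extension, which by default deletes it.

        #include <tlm>

        struct my_ext : tlm::tlm_extension<my_ext> {
          int id = 0;
          tlm::tlm_extension_base* clone() const override { return new my_ext(*this); }
          void copy_from(const tlm::tlm_extension_base& other) override {
            id = static_cast<const my_ext&>(other).id;
          }
        };

        void example() {
          {
            tlm::tlm_generic_payload gp;   // no memory manager set
            gp.set_extension(new my_ext);  // ownership passes to the payload
          }                                // ~tlm_generic_payload() calls free() -> deletes my_ext
        }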
  27. 1 point
    Eyck

    sc_uint and unsigned int

    unsigned int always has the length defined by the underlying platform, while sc_uint<> lets you specify the exact bit width of the type. In your case I would use 'unsigned int' as it is faster and has less overhead. Best regards
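    A minimal sketch contrasting the two types: sc_uint<12> performs 12-bit (wrapping) arithmetic, while unsigned int uses the platform's native width.

        #include <systemc>
        #include <iostream>

        int sc_main(int, char*[]) {
          sc_dt::sc_uint<12> u12 = 4095;
          unsigned int       u   = 4095;

          u12 += 1;   // wraps to 0: arithmetic is truncated to 12 bits
          u   += 1;   // 4096: native platform width

          std::cout << "sc_uint<12>: " << u12 << ", unsigned int: " << u << std::endl;
          return 0;
        }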
  28. 1 point
    You should start with learning your synthesis tool documentation. C++/SystemC synthesis is very tool-specific.
  29. 1 point
    David Black

    Temporal Decoupling

    I believe it would be fair to say that there are no universally accepted best practices, and the system design will dictate much of the implementation. In the case of shared memory, the target would need to have some idea that the memory of interest is shared. So you would need somewhere in the system to have a mapping. It might be the entire device, or a memory map might exist as a configurable object. When the target receives a read request for shared memory, it would then synchronize in order to be certain that any writes from the past are completed in other initiators. Depending on your design, you might be able to reduce the number of synchronizations if you can know a priori the nature of the sharing. E.g. if a block of memory was shared using a mutex, then synchronization might be limited to the mutex with the assumption that if you own the mutex, then the block is not written to by other initiators. This of course has some risks in the face of software defects.
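    A minimal sketch under the assumptions above (the shared address range, its granularity, and all names are invented for illustration): the target consumes the annotated delay before serving an access that falls into the region it believes is shared, so that earlier writes from other, temporally decoupled initiators have already taken effect.

        #include <systemc>
        #include <tlm>
        #include <tlm_utils/simple_target_socket.h>
        #include <cstring>

        struct shared_mem : sc_core::sc_module {
          tlm_utils::simple_target_socket<shared_mem> socket{"socket"};
          unsigned char storage[0x1000] = {0};
          static constexpr sc_dt::uint64 SHARED_LO = 0x800, SHARED_HI = 0xFFF;

          SC_CTOR(shared_mem) {
            socket.register_b_transport(this, &shared_mem::b_transport);
          }

          void b_transport(tlm::tlm_generic_payload& gp, sc_core::sc_time& delay) {
            sc_dt::uint64 a = gp.get_address();
            if (a >= SHARED_LO && a <= SHARED_HI) {
              sc_core::wait(delay);            // synchronize before touching the shared region
              delay = sc_core::SC_ZERO_TIME;
            }
            if (gp.is_read())  std::memcpy(gp.get_data_ptr(), storage + a, gp.get_data_length());
            if (gp.is_write()) std::memcpy(storage + a, gp.get_data_ptr(), gp.get_data_length());
            gp.set_response_status(tlm::TLM_OK_RESPONSE);
          }
        };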
  30. 1 point
    swami-cst

    TLM_INCOMPLETE_RESPONSE

    You seem to be probing the transaction response, before driving your transaction :) Since the default transaction status is TLM_INCOMPLETE_RESPONSE, you are getting that message. Try moving the call to b_transport before checking the trans.is_response_error() ...
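    A minimal sketch (function and parameter names are illustrative) of that ordering: drive the transaction first, then inspect the response status.

        #include <systemc>
        #include <tlm>

        void do_read(tlm::tlm_fw_transport_if<>& target, tlm::tlm_generic_payload& trans) {
          sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
          target.b_transport(trans, delay);   // drive the transaction first...
          if (trans.is_response_error())      // ...then inspect the response status
            SC_REPORT_ERROR("initiator", trans.get_response_string().c_str());
        }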
  31. 1 point
    The release contains a file docs/scv/scvref/vwg_1_0e.pdf which (sort of) clarifies this.
  32. 1 point
    Hi Maxim, After reading some SCV documentation, it looks like we're not allowed to use smart pointers in your preferred way. E.g. we need to use "addr()" (without specific member functions like "range(int, int)") as a basis for building the expression we are using in a later stage. In your case, a practical solution would be to have no constraint on the generated address but to mask the 2 LSB after generation. -- greetz, Bas
  33. 1 point
    Hi Maxim, "addr" is a pointer, so you need to access its fields and methods by using operator-> : SCV_CONSTRAINT(addr->range(1,0) == 0); -- greetz, Bas
  34. 1 point
    Minimal example of the actual underlying issues posted here: http://forums.accellera.org/topic/6232-killing-a-process-with-an-included-sc_event_andor_list/
  35. 1 point
    Eyck

    simple socket

    Just looked further: there is a typo in case 1. It should read

        //bind
        for(int i=0;i<3;i++){
          objA->initiator_socket[i]->bind(*(objB->target_socket[i]));
        }

    as you use an array of pointers. Again, sc_vector eases your life:

        //Model A
        sc_core::sc_vector<tlm_utils::simple_initiator_socket_tagged<ModelA>> initiator_socket;
        ...
        // Model B
        sc_core::sc_vector<tlm_utils::simple_target_socket_tagged<ModelB>> target_socket;
        ...
        //bind
        for(int i=0;i<3;i++){
          objA->initiator_socket[i].bind(objB->target_socket[i]);
        }

    The same applies to case 2:

        //bind
        objA->initiator_socket1->bind(*(objB->target_socket2));
        objA->initiator_socket2->bind(*(objB->target_socket3));
        objA->initiator_socket3->bind(*(objB->target_socket1));

    Best regards
  36. 1 point
    Eyck

    Read in customized structs

    Where did you check the values of p1 and p2? write() only schedules the value to be written; you will see the actual value in the next delta cycle. Best regards
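    A minimal sketch illustrating the point: the value written to an sc_signal only becomes visible after the update phase of the next delta cycle.

        #include <systemc>
        #include <iostream>

        int sc_main(int, char*[]) {
          sc_core::sc_signal<int> s{"s"};

          sc_core::sc_start(sc_core::SC_ZERO_TIME);   // elaborate and initialize
          s.write(42);
          std::cout << "same delta: " << s.read() << std::endl;  // still the old value
          sc_core::sc_start(sc_core::SC_ZERO_TIME);   // one delta: the update phase applies the write
          std::cout << "next delta: " << s.read() << std::endl;  // now 42
          return 0;
        }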
  37. 1 point
    Roman Popov

    How to connect array of ports?

    Usually you should use sc_vector instead of an array. The problem with an array is that the names of signals and ports in the array will be initialized to meaningless values (port_0 ... port_1). If you still want to use arrays, then bind with a loop. Here is an example, both for array and sc_vector:

        #include <systemc.h>

        static const int N_PORTS = 4;

        struct dut : sc_module {
          sc_in<int> in_array[N_PORTS];
          sc_vector<sc_in<int>> in_vector{"in_vector", N_PORTS};
          SC_CTOR(dut) {}
        };

        struct test : sc_module {
          dut d0{"d0"};
          sc_signal<int> sig_array[N_PORTS];
          sc_vector<sc_signal<int>> sig_vector{"sig_vector", N_PORTS};

          SC_CTOR(test) {
            for (size_t i = 0; i < N_PORTS; ++i) {
              d0.in_array[i](sig_array[i]);
            }
            d0.in_vector(sig_vector);
          }
        };
  38. 1 point
    Philipp A Hartmann

    tlm_socket_base_if

    Hi Guillaume, I agree that the new pure-virtual functions in tlm_base_(initiator/target)_socket are not compliant to IEEE 1666-2011. However, I'm curious what use case you have that uses these classes directly instead of inheriting from tlm_(initiator/target)_socket, where the functions are implemented? Regarding the implementation on the base socket level, I suggest adding a typedef to the fw/bw interface classes and using these typedefs in the socket base class. Something like:

        template <typename TYPES = tlm_base_protocol_types>
        class tlm_fw_transport_if // ...
        {
        public:
          typedef TYPES protocol_types;
        };

        // tlm_base_target_socket
        virtual sc_core::sc_type_index get_protocol_types() const
        {
          return typeid(typename FW_IF::protocol_types);
        }

    Theoretically, these types could mismatch between FW_IF and BW_IF in manual socket implementations. Therefore, I'd suggest referring to the FW_IF in the target and the BW_IF in the initiator socket. Greetings from Duisburg, Philipp
  39. 1 point
    Roman Popov

    using gtkwave

    In the working directory (active directory when you launch the executable). Search for "and_gate.vcd"
  40. 1 point
    Roman Popov

    using gtkwave

    I've removed next_trigger and simulated your code. Check attached waveform. Check how to model delay line here:
  41. 1 point
    AmeyaVS

    Behavioral XOR Gate with Delay

    Hello @re1418ma, You can look at this example: http://forums.accellera.org/topic/5678-clock-to-q-propagation-delay/?do=findComment&comment=13657 Or this one which shows how to add delay in full adder: http://forums.accellera.org/topic/5715-delaying-simulated-execution/?do=findComment&comment=13844 Hope it helps. Regards, Ameya Vikram Singh
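    In addition to the linked examples, here is a minimal sketch (module, port names, and the delay value are illustrative) of a behavioral XOR with a propagation delay: the input method schedules a timed event, and a second method drives the output when it fires. Note that closely spaced input changes collapse into a single scheduled update, because an sc_event holds at most one pending notification.

        #include <systemc>

        SC_MODULE(xor_delay) {
          sc_core::sc_in<bool>  a{"a"}, b{"b"};
          sc_core::sc_out<bool> y{"y"};
          sc_core::sc_event     update_ev;
          sc_core::sc_time      delay = sc_core::sc_time(2, sc_core::SC_NS);

          SC_CTOR(xor_delay) {
            SC_METHOD(on_input);
            sensitive << a << b;

            SC_METHOD(drive_output);
            sensitive << update_ev;
            dont_initialize();
          }

          void on_input()     { update_ev.notify(delay); }        // schedule the output update
          void drive_output() { y.write(a.read() ^ b.read()); }   // sample inputs at that time
        };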
  42. 1 point
    Roman Popov

    SystemC 2.3 Pretty-Printer

    Hard to say without debugging. In the source code I see they are registered using "RegexpCollectionPrettyPrinter"; probably sc_dt::sc_int<(.*)> matches sc_signal<sc_int<2>>? Out of curiosity, may I know how you manage signals/ports? A signal has m_cur_val and m_new_val fields, storing the current and next signal value, so they are pretty-printed as "m_cur_val -> m_new_val". Ports are just decorated pointers; a signal port has an m_interface field holding a pointer to the signal, so the pretty-printer dereferences it, casts it to the dynamic type and prints it the same way as a signal. Actually you can do much more with the GDB Python API. I have even written a SystemC to Verilog converter using GDB (it generates a complete netlist, but without process bodies). Commercial SystemC interactive simulators/debuggers are also based on GDB AFAIK.
  43. 1 point
    Hello everyone, first of all I apologize if the post is too big and I know sometimes people get discouraged to read big posts. On the other hand I spent quite some time trying to make the post as clear as possible for the reader. So please do not get discouraged :). My SystemC version: SystemC 2.3.1 My operating System: Ubuntu 16.04 I am trying to understand how SystemC simulator works, and for that I ran the following code: SC_MODULE(EventNotifications) { sc_event evt1; void p1() { evt1.notify(10, SC_NS); evt1.notify(5, SC_NS); evt1.notify(); evt1.notify(0, SC_NS); wait(10, SC_NS); evt1.notify(5, SC_NS); wait(10, SC_NS); } void p2() { wait(10, SC_NS); evt1.cancel(); evt1.notify(); wait(10,SC_NS); } void p3() { cout << "evt1 is activated at " << sc_time_stamp() << endl; } SC_CTOR(EventNotifications){ SC_THREAD(p1); SC_THREAD(p2); SC_METHOD(p3); sensitive << evt1; dont_initialize(); } }; I referred to the SystemC Language Reference Manual 2.0.1: http://homes.di.unimi.it/~pedersini/AD/SystemC_v201_LRM.pdf and in my explanation down below, I used the following abbreviations: R = {} - list of runnable processes, D = {} - list of processes that have been notified using delta notifications, T = {} - list of processes where timed notification has been used e.g. event.notify( >0, SC_NS) 2.4.1 Scheduler Steps at page 6 from the SystemC Language Reference Manual 2.0.1 was used for the following reasoning of how this code works: Initialize phase: We initialize all the processes that are ready to run. Process p3 at the beginning will not be runnable due to dont_initialize command. So only p1 and p2 processes runnable, as a result R = {p1, p2} at the end of initialize phase. Now we go to the evaluation phase. We start with simulation time equal to 0ns. Simulation time = 0 ns Evaluation phase (delta cycle = 0): We have at the beginning R = {p1, p2}, D = {} (empty list) and T = {}. Let's say scheduler decides to execute p1 first, so p1 gets removed from R, effectively R = {p2}. Then we execute 1st timed notifications evt1.notify(10, SC_NS), after that we have evt1.notify(5, SC_NS), since 5ns is earlier than 10ns, only 5ns will be remembered so we have T = {p3}. Next statement is evt1.notify() which is immediate notification, and will overwrite the previous notification evt1.notify(5, SC_NS). Immediate notification is put into the list p3, R ={p3}, and T = {}. Next statement is evt1.notify(0, SC_NS), so p3 will be put in the list D. So now we have R = {p3}, D = {p3}, T ={}. Question 1: if I swapped two statements evt1.notify(0, SC_NS) and evt1.notify() here, will the delta notification will be removed? In my opinion only evt1.nofity() will be remembered: From page 128 of the manual: "A given sc_event object can have at most one pending notification at any point. If multiple notifications are made to an event that would violate this rule, the “earliest notification wins” rule is applied to determine which notification is discarded." As a result I would have R = {p3, p2}, D = {}, T = {}. Now we encounter the wait(10, SC_NS) and p1 is put to wait. Question 2: Since I we have wait(10, SC_NS), does that mean that the process p3 will be put in separate list/queue of sleep processes? Let's call it list S, so we would have S = {p1} effectively? Next let's say scheduler decides to run p2, so we remove p2 from the R list and we have R = {p3}. There we encounter wait(10, SC_NS), and p2 gets into S list, S = {p1, p2}. 
Now we have R = {p3} and p3 gets executed, so immediate notification gets executed at simulation time 0 ns as 1st console output indicates. Now method p3 exits, and list R is empty, R = {}, so we go to update phase. Update phase (delta cycle = 0): Nothing to be updated. We just go to the next delta cycle phase. Next delta cycle phase(delta cycle = 0): We increment delta cycle, and check all the contents of list D and put them in the list R. In our case D = {p3}, thus R = {p3}. Now we go back to evaluation phase. Evaluation phase(delta cycle = 1): We run only p3, here so the delta notification happens at simulation time 0 ns + 1 delta cycle. R = {}, we go to the update phase. Update phase (delta cycle = 1): Nothing to be updated. Go to next delta cycle phase. Next delta cycle phase(delta cycle = 1): We increment delta cycle, but since D = {}, we go to the increase simulation time phase. Increase simulation time phase (delta cycle = 2): From the page 6: "If there are no more timed event notifications, the simulation is finished. Else, advance the current simulation time to the time of the earliest (next) pending timed event notification." Now back to my Question 2, since we T = {}, that would mean that we have no timed event notifications, and based on reference manual simulation should be finished, which is not the case here if you run the code. So from my understanding, the processes that were called with wait operation, will not be put in the "list of sleep processes" but instead they will be put to either list T or D. I think in case wait(>0, SC_NS), process gets put into the T list, and if wait(0, SC_NS) is called process should be put into the D list. So in our case T = {p1, p2}? We increase simulation time to the earliest 10 ns, and contents of list T = {p1, p2} are put into the list R = {p1, p2}. and we go to the next evaluation phase. Simulation time = 10 ns: Evaluation phase (delta cycle = 0): Here we can either run p1 or p2. Let's say p2 is run first, and we encounter evt1.cancel(), since there are no pending events nothing will happen. Then we execute evt1.notify(), and p3 gets into the list R, so R = {p1, p3}. Then wait encountered, so T = {p2}. Now let's say scheduler decides to execute p3, and then immediate notification happens at simulation time of 10 ns. Now R = {p1}, so p1 gets executed and there we have evt1.notify(5, SC_NS), so p3 gets into the list T = {p2, p3}. Then we execute wait(10, SC_NS), and p1 sleeps again. So T = {p2, p3, p1}. Since R = {0}, we go to update phase. Update phase (delta cycle = 0): Nothing to be updated, so we go to the next phase. Next delta cycle phase (delta cycle = 0): We increment delta cycle. Nothing in list D, so we go to the next phase. Increase simulation time phase (delta cycle = 1): We put contents of T into R, thus R = {p2, p3, p1}, and we increment time to the earliest pending notification, since we had evt1.notify(5, SC_NS) and threads slept for 10 ns, we chose 5 ns. So simulated time is increased to 15 ns. We again go to the evaluation phase. Simulation time = 15 ns: Evaluation phase (delta cycle = 0): Here R = {p2, p3, p1}, so let's say we execute p3 first, as result timed notification evt1.notify(5, SC_NS), happens at simulated time 15 ns. Now R = {p2, p1}, and p1 executes, since nothing after last wait statement thread terminates. Same situation for p2, so p2 terminates. R ={} go to next phase. Update phase (delta cycle = 0): Go to next phase. 
Next delta cycle phase (delta cycle = 0): Delta cycle updates, since D = {}, we go to the next phase Increase simulation time phase (delta cycle = 1): Since T = {}, nothing to be simulated and simulation ends. So this would explain the following result I got outputted on the console: evt1 is activated at 0 s evt1 is activated at 0 s evt1 is activated at 5 ns evt1 is activated at 10 ns I tried to check my assumption that when wait(0, SC_NS) gets called in some process, the process will be put in the D list. So I ran this code: SC_MODULE(DeltaTimeWait) { void p1(void) { while (true) { cout << "p1: " << sc_time_stamp() << endl; wait(0, SC_NS); } } void p2(void) { while (true) { cout << "p2: " << sc_time_stamp() << endl; wait(0, SC_NS); } } SC_CTOR(DeltaTimeWait) { SC_THREAD(p1); SC_THREAD(p2); } }; There is also one thing I noticed. For example if I change the order of the thread registration in the constructor of the 1st code, and having SC_THREAD(p2) before the SC_THREAD(p1), I get different result. SC_CTOR(Task5_d){ SC_THREAD(p2); SC_THREAD(p1); SC_METHOD(p3); sensitive << evt1; dont_initialize(); } I get the following result: evt1 is activated at 0 s evt1 is activated at 0 s evt1 is activated at 10 ns I am not sure if my reasoning for this result is correct. So I think that we get his output due to reason that at the point where simulation time was 10 ns, we had two choices, we could either schedule p1 or p2 first. Simulation time = 10 ns: Evaluation phase (delta cycle = 0): At this point as I have mentioned earlier, we can either run p1 or p2 first. And in the first case we assumed p2 was run first. But if we assume now p1 will be run 1st instead of p2. So now p1 gets executed and statement evt1.notfiy(5, SC_NS) is encountered. As a result, process p1 gets into the list T and then we sleep the process. Now the process p2 gets scheduled, and the 1st line we encounter is evt1.cancel(), which as result would cancel pending notification evt1.notfiy(5, SC_NS) from process p1. After that evt1.notify() is executed which results p3 getting into R list. So p3 being only in the list R, we execute process p3, and evt1 is notified at simulation of time 10 ns. Question 3: How come that order of thread registration actually affects the order of the process being scheduled? I am not sure if my reasoning is correct, so I would appreciate your feedback, as I am only a beginner in SystemC. Looking forward to your feedback. Ivan.
  44. 1 point
    Eyck

    TLM transaction tracing

    Maybe a little bit late, but there are socket implementations available which trace TLM transactions into an SCV database. They can be found at https://github.com/Minres/SystemC-Components/tree/master/incl/scv4tlm and are used e.g. at https://github.com/Minres/SystemC-Components/blob/5f7387ab7e3dfc2ff6a7cac6fbe834ed7ec8ae36/incl/sysc/tlmtarget.h which in turn serves as a building block in https://github.com/Minres/SystemC-Components-Test/tree/master/examples/simple_system The setup given by Kai is put into sysc::tracer where all the tracing setup (VCD & SCV) is implemented. Best regards -Eyck
  45. 1 point
    David Black

    approximately timed

    IEEE 1666-2011 section 10.2 states: IEEE 1666-2011 section 10.3.4 states:
  46. 1 point
    Hi, can anyone provide an example of how to get a virtual interface in a sequence? I need to use the clk in the sequence. Thanks, praneeth
  47. 1 point
    apfitch

    user defined data type signal assignment

    Hi Zarie, that's what I meant by "you must wait a delta". If no time passes, a primitive channel does not update. Kocha's solution will work (adding a call to sc_start - in fact even sc_start(SC_ZERO_TIME) should work). regards Alan
  48. 1 point
    apfitch

    user defined data type signal assignment

    How do you know the assignment is not working? When are you printing out the assigned value? Remember that you must wait at least a delta for a primitive channel to update. There's more about user defined types and sc_signal here http://www.doulos.com/knowhow/systemc/faq/#q1 regards Alan
  49. 1 point
    David Long

    static and dynamic sensitivity

    Hi Amit, A thread process uses its static sensitivity when it contains one or more calls to wait() with no arguments. The static sensitivity is set before the simulation starts running, usually by using "sensitive" in its parent module's constructor. Dynamic sensitivity is where a process contains one or more calls to wait(a_time_value) or wait(a_named_event). It is dynamic because the conditions that cause a thread process to wake up change as each wait statement is executed when the simulation runs. Here is a very brief example (not tested):

        SC_MODULE(mod) {
          sc_in<bool> clk;
          sc_event e;

          void proc1() {
            while(1) {
              wait();          // static sensitivity
              e.notify();
            }
          }

          void proc2() {
            while(1) {
              wait(e);         // wait for event (dynamic)
              // do something
              wait(1, SC_NS);  // wait for time (dynamic)
              // do something
            }
          }

          SC_CTOR(mod) {
            SC_THREAD(proc1);
            sensitive << clk.pos();  // static sensitivity

            SC_THREAD(proc2);  // no static sensitivity, runs during initialization until 1st wait reached
          }
        };

    You can find further details in section 4.2 of the SystemC LRM (IEEE 1666-2011). Regards, Dave
  50. 1 point
    Assuming you're using plain signal ports, you can use the event() member function to check whether a specific port has been triggered in the current delta cycle:

        sc_vector< sc_in<int> > in_vec;
        // ...
        SC_METHOD(proc);
        for( unsigned i = 0; i < in_vec.size(); ++i )
          sensitive << in_vec[i];
        // ...
        void proc() {
          for( unsigned i = 0; i < in_vec.size(); ++i )
            if( in_vec[i]->event() )
              std::cout << "in_vec[" << i << "] triggered." << std::endl;
        }

    Greetings from Oldenburg, Philipp