
Leaderboard


Popular Content

Showing content with the highest reputation since 08/23/2018 in all areas

  1. 2 points
    The problem is that when you integrate RTL IP into a loosely-timed VP that way, the whole simulator will have the performance of a cycle-accurate model: the clock generator is always on, so the Verilated model is triggered even when it is idle. So don't try to boot Linux with such a simulator. If your RTL IP supports power gating or clock gating, it is a good idea to disable clock generation while the RTL IP is turned off. In that case you don't pay for what you don't use: you can boot your OS quickly and then enable the clock generator when you start to debug IP-specific driver code.
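    A rough sketch of that idea (module, port, and signal names below are illustrative, not from the original post): the clock generator thread simply stops toggling while the IP is gated, so the Verilated model sees no edges and the rest of the LT platform runs at full speed.

    #include <systemc.h>

    // Hypothetical gateable clock generator: toggles 'clk' only while 'enable'
    // is high; while disabled the thread sleeps and costs nothing.
    SC_MODULE(GateableClockGen) {
        sc_out<bool> clk;
        sc_in<bool>  enable;   // driven by the power/clock-gating control

        void gen() {
            while (true) {
                if (!enable.read())
                    wait(enable.posedge_event());  // sleep until the RTL IP is switched on
                clk.write(true);
                wait(5, SC_NS);
                clk.write(false);
                wait(5, SC_NS);
            }
        }

        SC_CTOR(GateableClockGen) { SC_THREAD(gen); }
    };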
  2. 2 points
    Hello @kallooran, What version of SystemC library are you using? This issue has been fixed in the release of SystemC-2.3.2. You can find the latest release of SystemC-2.3.3 here: http://accellera.org/downloads/standards/systemc Hope it helps. Regards, Ameya Vikram Singh
  3. 2 points
    David Black

    Seeking Feedback on Datatypes

    Actually, it adds a lot of value. std::array can be passed by reference in a function call, and the function can then determine the proper size of the array. This is much better than passing pointers, as is standard in C. You can also copy an array, which should be synthesizable and which reduces coding and greatly improves readability. It should be possible to implement some of the <algorithm> functions on std::array too. Also, you can have bounds checking for additional safety, although that aspect is probably not synthesizable. Additionally, constexpr should be quite helpful for the synthesis aspect.
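    A small illustration of those points (not from the original post; needs C++14 or later for the constexpr loop):

    #include <array>
    #include <algorithm>
    #include <cstddef>

    // N is deduced from the argument, so the function always knows the array
    // size -- unlike a raw pointer passed the C way.
    template <std::size_t N>
    constexpr int sum(const std::array<int, N>& data) {
        int total = 0;
        for (std::size_t i = 0; i < N; ++i)
            total += data[i];
        return total;
    }

    int main() {
        std::array<int, 4> a{4, 3, 2, 1};
        std::array<int, 4> b = a;                 // whole-array copy
        std::sort(b.begin(), b.end());            // <algorithm> works directly
        static_assert(sum(std::array<int, 2>{1, 2}) == 3, "evaluated at compile time");
        return sum(a) - sum(b);                   // 0
    }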
  4. 2 points
    Hi, I'm not an implementer of the reference simulator, but as far as I can judge, the re-throw is used to find a more specific type of exception (since sc_elab_and_sim() just uses a catch-all), and sc_handle_exception() converts it into an sc_report so it can be handled by the SystemC reporting system. Actually I agree it would be better to handle it directly in sc_elab_and_sim(), but this would duplicate code. A side note regarding debugging: if you use gdb, there is a command 'catch throw' which stops execution right at the point where the (original) exception is thrown. This comes in pretty handy in such cases. Best regards
  5. 1 point
    AmeyaVS

    What is SystemC library special in?

    Hello @Elvis Shera, It seems your SystemC library has been built with a different C++ standard flag. Can you post the output of the following commands?

    # Compiler version you are using
    g++ -v
    # Library properties
    nm -C $SYSTEMC_HOME/lib-linux64/libsystemc.so | grep api_version

    Regards, Ameya Vikram Singh
  6. 1 point
    Eyck

    Read in customized structs

    Where did you check the values of p1 and p2? write() only schedules the value to be written; you will see the actual value in the next delta cycle. Best regards
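    A minimal sketch of what that means in practice (module and signal names are made up for illustration):

    #include <systemc.h>

    SC_MODULE(DeltaDemo) {
        sc_signal<int> s;

        void run() {
            s.write(42);
            cout << "same delta: " << s.read() << endl;  // still the old (default) value
            wait(SC_ZERO_TIME);                          // let the update phase run
            cout << "next delta: " << s.read() << endl;  // now 42
        }

        SC_CTOR(DeltaDemo) { SC_THREAD(run); }
    };

    int sc_main(int, char*[]) {
        DeltaDemo top("top");
        sc_start();
        return 0;
    }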
  7. 1 point
    Philipp A Hartmann

    tlm_socket_base_if

    Hi Guillaume, I agree that the new pure-virtual functions in tlm_base_(initiator/target)_socket are not compliant with IEEE 1666-2011. However, I'm curious which use case you have that requires using these classes directly instead of inheriting from tlm_(initiator/target)_socket, where the functions are implemented. Regarding the implementation at the base socket level, I suggest adding a typedef to the fw/bw interface classes and then using these typedefs in the socket base class. Something like:

    template <typename TYPES = tlm_base_protocol_types>
    class tlm_fw_transport_if
      // ...
    {
    public:
      typedef TYPES protocol_types;
    };

    // tlm_base_target_socket
    virtual sc_core::sc_type_index get_protocol_types() const
    {
      return typeid(typename FW_IF::protocol_types);
    }

    Theoretically, these types could mismatch between FW_IF and BW_IF in manual socket implementations. Therefore, I'd suggest referring to the FW_IF in the target socket and to the BW_IF in the initiator socket. Greetings from Duisburg, Philipp
  8. 1 point
    SystemC is single-threaded, so you don't need std::mutex. However, SystemC does not guarantee any order for processes executed in the same delta cycle. SystemC is a discrete-event simulator, and I suggest you learn the concepts of simulation semantics from Clause 4, "Elaboration and simulation semantics", of the IEEE 1666 SystemC standard. The purpose of primitive channels like sc_signal is to remove non-determinism by separating the channel update request from the actual update of the value into two different simulation phases. But this only works if there is a single writing process during a delta cycle. In your case, if the initiator and responder threads are executed in the same delta cycle, they both read the same "current value" of the signal. The initiator will request to set the signal value to (valid = 1, ready = 0) and the responder will request to set it to (valid = 0, ready = 1). Since there is no guarantee of order between processes, you will get either (1,0) or (0,1) on the next cycle.
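    A minimal sketch of the single-writer rule (signal and process names are illustrative, not from the original question): give valid and ready their own signals, each driven by exactly one process, and sample them after the update phase.

    #include <systemc.h>

    SC_MODULE(Handshake) {
        sc_signal<bool> valid, ready;   // one writer per signal

        void initiator() { valid.write(true); }   // only writer of 'valid'
        void responder() { ready.write(true); }   // only writer of 'ready'

        void monitor() {
            wait(SC_ZERO_TIME);         // sample after the update phase
            cout << "valid=" << valid.read() << " ready=" << ready.read() << endl;
        }

        SC_CTOR(Handshake) {
            SC_THREAD(initiator);
            SC_THREAD(responder);
            SC_THREAD(monitor);
        }
    };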
  9. 1 point
    Those questions are covered in detail in paragraphs 14.1 and 14.2 of the SystemC standard; I can't answer in a better way. TLM-2.0 simulations are not cycle-accurate, so you don't have clock edge events. In the AT modeling style you should call wait(delay) after each transport call. In the LT modeling style all initiators are temporally decoupled and can run ahead of simulation time, usually by a globally specified time quantum. For debugging you can use the same techniques as in cycle-accurate modeling:
    - source-level debugging with breakpoints and stepping
    - transaction and signal tracing
    - logging
    Compared with RTL, debugging using waveforms won't be that effective, because in AT/LT modeling the state of the model can change significantly in a single simulator/waveform step. Usually the preferred way is a combination of logging and source-level debugging. Debugging TLM models is harder compared to RTL. Also, C++ is much more complex and error-prone compared to VHDL/Verilog.
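    For the LT case, here is a sketch of the usual temporal-decoupling pattern with tlm_utils::tlm_quantumkeeper (the module and transaction details below are illustrative, not from the post):

    #include <systemc.h>
    #include <tlm.h>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/tlm_quantumkeeper.h>

    // Hypothetical LT initiator: runs ahead of simulation time and only
    // synchronizes (calls wait) when its local offset exceeds the global quantum.
    struct LtInitiator : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<LtInitiator> socket;
        tlm_utils::tlm_quantumkeeper qk;

        SC_CTOR(LtInitiator) : socket("socket") { SC_THREAD(run); }

        void run() {
            qk.reset();
            for (unsigned i = 0; i < 16; ++i) {
                tlm::tlm_generic_payload trans;
                unsigned data = i;
                trans.set_command(tlm::TLM_WRITE_COMMAND);
                trans.set_address(i * 4);
                trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
                trans.set_data_length(4);

                sc_core::sc_time delay = qk.get_local_time();
                socket->b_transport(trans, delay);   // target adds its latency to 'delay'
                qk.set(delay);
                if (qk.need_sync()) qk.sync();       // wait() only at quantum boundaries
            }
        }
    };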
  10. 1 point
    Roman Popov

    using gtkwave

    In the working directory (active directory when you launch the executable). Search for "and_gate.vcd"
  11. 1 point
    Roman Popov

    using gtkwave

    Your code is not correct. Why did you put next_trigger(5, SC_NS) inside a method? Remove it, and you will get the correct waveform for the AND gate.
  12. 1 point
    Well, the answer is a bit more complex. The main difference is that the standard requires that no call to wait() is allowed during an nb_transport call, while in b_transport it is allowed. So any implementation adhering to the standard guarantees this. Let's first look at the non-blocking implementation. tlm_phase values do not denote phases directly but rather time points of the protocol. Actually you have two phases, request and response, which are denoted by two time points each. So the initiator can indicate the start or end of a phase of a transaction and be sure that the call is not blocked by a call to wait(). So you can model the behavior and timing of a bus transaction in a fairly granular way and do something else while the transaction is ongoing. You can even have two transactions in parallel, one being in the request phase while the other one is in the response phase (or even more if you have more phases defined). The transactions are pipelined. Looking at b_transport, the situation is different. The target can delay the transaction by calling wait() until it is ready to respond. During that time no other transaction can be ongoing, and the initiator is blocked and cannot react to anything. Blocking accesses can be used if the timing of the communication is not of interest/not modeled (the other scenario is loosely-timed models, but that's a different story). They are easy to implement and easy to use. Non-blocking is used if the timing of the communication needs to be modeled in more detail. E.g. this allows you to model, simulate and analyse bus contention situations, as it allows you to attach timing to all phases of a bus transaction, like grant, address phase, and data phase. I hope this sheds some light -Eyck
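    A minimal sketch of a blocking target to illustrate the difference (socket and memory details are made up): the wait() below is fine inside b_transport but would be illegal inside nb_transport_fw/bw.

    #include <systemc.h>
    #include <tlm.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstring>

    struct BlockingTarget : sc_core::sc_module {
        tlm_utils::simple_target_socket<BlockingTarget> socket;
        unsigned char mem[256];

        SC_CTOR(BlockingTarget) : socket("socket") {
            socket.register_b_transport(this, &BlockingTarget::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            wait(100, SC_NS);   // block until the target is "ready" -- allowed here only
            unsigned char* ptr = trans.get_data_ptr();
            sc_dt::uint64   adr = trans.get_address();
            if (trans.is_write())
                std::memcpy(&mem[adr], ptr, trans.get_data_length());
            else
                std::memcpy(ptr, &mem[adr], trans.get_data_length());
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };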
  13. 1 point
    AmeyaVS

    Behavioral XOR Gate with Delay

    Hello @re1418ma, You can look at this example: http://forums.accellera.org/topic/5678-clock-to-q-propagation-delay/?do=findComment&comment=13657 Or this one, which shows how to add a delay to a full adder: http://forums.accellera.org/topic/5715-delaying-simulated-execution/?do=findComment&comment=13844 Hope it helps. Regards, Ameya Vikram Singh
  14. 1 point
    AmeyaVS

    Behavioral XOR Gate with Delay

    Hello @re1418ma, This has already been discussed here: http://forums.accellera.org/topic/5715-delaying-simulated-execution/ Hope it helps, and if you have further questions please feel free to ask. Regards, Ameya Vikram Singh
  15. 1 point
    In fact the fix is relatively simple and will hopefully be in the next release (in the meantime, please find attached a small patch). The real question is why this inheritance structure was like this in the first place. Here is the rationale from the PR: class tlm_put_get_imp<class1, class2> inherits privately and virtually from tlm_put_if<class1> and tlm_get_peek_if<class2>. tlm_put_if and tlm_get_peek_if are compound classes that are interfaces, which means that they (and their parents) have only pure virtual methods. Because these methods are pure, they must be implemented, and so they are in tlm_put_get_imp, as public methods. So there is no point in the private inheritance. (Special thanks to Luc Michel from GreenSocs who helped sort this out) tlm1.patch
  16. 1 point
    Hello everyone, first of all I apologize if the post is too big, and I know people sometimes get discouraged from reading big posts. On the other hand, I spent quite some time trying to make the post as clear as possible for the reader, so please do not get discouraged :).

    My SystemC version: SystemC 2.3.1
    My operating system: Ubuntu 16.04

    I am trying to understand how the SystemC simulator works, and for that I ran the following code:

    SC_MODULE(EventNotifications) {
      sc_event evt1;

      void p1() {
        evt1.notify(10, SC_NS);
        evt1.notify(5, SC_NS);
        evt1.notify();
        evt1.notify(0, SC_NS);
        wait(10, SC_NS);
        evt1.notify(5, SC_NS);
        wait(10, SC_NS);
      }

      void p2() {
        wait(10, SC_NS);
        evt1.cancel();
        evt1.notify();
        wait(10, SC_NS);
      }

      void p3() {
        cout << "evt1 is activated at " << sc_time_stamp() << endl;
      }

      SC_CTOR(EventNotifications) {
        SC_THREAD(p1);
        SC_THREAD(p2);
        SC_METHOD(p3);
        sensitive << evt1;
        dont_initialize();
      }
    };

    I referred to the SystemC Language Reference Manual 2.0.1: http://homes.di.unimi.it/~pedersini/AD/SystemC_v201_LRM.pdf and in my explanation down below I used the following abbreviations:

    R = {} - list of runnable processes
    D = {} - list of processes that have been notified using delta notifications
    T = {} - list of processes for which a timed notification has been used, e.g. event.notify(>0, SC_NS)

    Section 2.4.1 "Scheduler Steps" on page 6 of the SystemC Language Reference Manual 2.0.1 was used for the following reasoning about how this code works:

    Initialize phase: We initialize all the processes that are ready to run. Process p3 will not be runnable at the beginning due to the dont_initialize() command, so only p1 and p2 are runnable; as a result R = {p1, p2} at the end of the initialize phase. Now we go to the evaluation phase. We start with simulation time equal to 0 ns.

    Simulation time = 0 ns

    Evaluation phase (delta cycle = 0): At the beginning we have R = {p1, p2}, D = {} (empty list) and T = {}. Let's say the scheduler decides to execute p1 first, so p1 gets removed from R; effectively R = {p2}. Then we execute the first timed notification evt1.notify(10, SC_NS); after that we have evt1.notify(5, SC_NS), and since 5 ns is earlier than 10 ns, only 5 ns will be remembered, so we have T = {p3}. The next statement is evt1.notify(), which is an immediate notification and will overwrite the previous notification evt1.notify(5, SC_NS). The immediate notification puts p3 into the runnable list, R = {p3}, and T = {}. The next statement is evt1.notify(0, SC_NS), so p3 will be put in the list D. So now we have R = {p3}, D = {p3}, T = {}.

    Question 1: If I swapped the two statements evt1.notify(0, SC_NS) and evt1.notify() here, would the delta notification be removed? In my opinion only evt1.notify() will be remembered. From page 128 of the manual: "A given sc_event object can have at most one pending notification at any point. If multiple notifications are made to an event that would violate this rule, the 'earliest notification wins' rule is applied to determine which notification is discarded." As a result I would have R = {p3, p2}, D = {}, T = {}.

    Now we encounter the wait(10, SC_NS) and p1 is put to wait.

    Question 2: Since we have wait(10, SC_NS), does that mean that the process p1 will be put in a separate list/queue of sleeping processes? Let's call it list S, so we would effectively have S = {p1}?

    Next, let's say the scheduler decides to run p2, so we remove p2 from the R list and we have R = {p3}. There we encounter wait(10, SC_NS), and p2 goes into the S list, S = {p1, p2}.
    Now we have R = {p3} and p3 gets executed, so the immediate notification gets executed at simulation time 0 ns, as the first console output indicates. Now method p3 exits and list R is empty, R = {}, so we go to the update phase.

    Update phase (delta cycle = 0): Nothing to be updated. We just go to the next delta cycle phase.

    Next delta cycle phase (delta cycle = 0): We increment the delta cycle, check all the contents of list D and put them into the list R. In our case D = {p3}, thus R = {p3}. Now we go back to the evaluation phase.

    Evaluation phase (delta cycle = 1): We run only p3 here, so the delta notification happens at simulation time 0 ns + 1 delta cycle. R = {}, so we go to the update phase.

    Update phase (delta cycle = 1): Nothing to be updated. Go to the next delta cycle phase.

    Next delta cycle phase (delta cycle = 1): We increment the delta cycle, but since D = {}, we go to the increase simulation time phase.

    Increase simulation time phase (delta cycle = 2): From page 6: "If there are no more timed event notifications, the simulation is finished. Else, advance the current simulation time to the time of the earliest (next) pending timed event notification." Now back to my Question 2: since T = {}, that would mean we have no timed event notifications, and based on the reference manual the simulation should be finished, which is not the case here if you run the code. So from my understanding, the processes that called wait will not be put in a "list of sleeping processes"; instead they will be put into either list T or D. I think in the case of wait(>0, SC_NS) the process gets put into the T list, and if wait(0, SC_NS) is called the process should be put into the D list. So in our case T = {p1, p2}? We increase simulation time to the earliest time, 10 ns, the contents of list T = {p1, p2} are put into the list R = {p1, p2}, and we go to the next evaluation phase.

    Simulation time = 10 ns

    Evaluation phase (delta cycle = 0): Here we can run either p1 or p2. Let's say p2 is run first, and we encounter evt1.cancel(); since there are no pending notifications, nothing happens. Then we execute evt1.notify(), and p3 gets into the list R, so R = {p1, p3}. Then a wait is encountered, so T = {p2}. Now let's say the scheduler decides to execute p3, and the immediate notification happens at simulation time 10 ns. Now R = {p1}, so p1 gets executed and there we have evt1.notify(5, SC_NS), so p3 gets into the list T = {p2, p3}. Then we execute wait(10, SC_NS), and p1 sleeps again, so T = {p2, p3, p1}. Since R = {}, we go to the update phase.

    Update phase (delta cycle = 0): Nothing to be updated, so we go to the next phase.

    Next delta cycle phase (delta cycle = 0): We increment the delta cycle. Nothing in list D, so we go to the next phase.

    Increase simulation time phase (delta cycle = 1): We put the contents of T into R, thus R = {p2, p3, p1}, and we advance time to the earliest pending notification; since we had evt1.notify(5, SC_NS) and the threads slept for 10 ns, we choose 5 ns. So simulated time is increased to 15 ns. We again go to the evaluation phase.

    Simulation time = 15 ns

    Evaluation phase (delta cycle = 0): Here R = {p2, p3, p1}, so let's say we execute p3 first; as a result the timed notification evt1.notify(5, SC_NS) happens at simulated time 15 ns. Now R = {p2, p1}, and p1 executes; since there is nothing after the last wait statement, the thread terminates. Same situation for p2, so p2 terminates. R = {}, go to the next phase.

    Update phase (delta cycle = 0): Go to the next phase.
    Next delta cycle phase (delta cycle = 0): The delta cycle is incremented; since D = {}, we go to the next phase.

    Increase simulation time phase (delta cycle = 1): Since T = {}, there is nothing left to simulate and the simulation ends.

    So this would explain the following result I got on the console:

    evt1 is activated at 0 s
    evt1 is activated at 0 s
    evt1 is activated at 5 ns
    evt1 is activated at 10 ns

    I tried to check my assumption that when wait(0, SC_NS) gets called in some process, the process will be put in the D list. So I ran this code:

    SC_MODULE(DeltaTimeWait) {
      void p1(void) {
        while (true) {
          cout << "p1: " << sc_time_stamp() << endl;
          wait(0, SC_NS);
        }
      }

      void p2(void) {
        while (true) {
          cout << "p2: " << sc_time_stamp() << endl;
          wait(0, SC_NS);
        }
      }

      SC_CTOR(DeltaTimeWait) {
        SC_THREAD(p1);
        SC_THREAD(p2);
      }
    };

    There is also one thing I noticed. For example, if I change the order of the thread registration in the constructor of the first code, putting SC_THREAD(p2) before SC_THREAD(p1), I get a different result.

    SC_CTOR(Task5_d) {
      SC_THREAD(p2);
      SC_THREAD(p1);
      SC_METHOD(p3);
      sensitive << evt1;
      dont_initialize();
    }

    I get the following result:

    evt1 is activated at 0 s
    evt1 is activated at 0 s
    evt1 is activated at 10 ns

    I am not sure if my reasoning for this result is correct. I think we get this output because at the point where simulation time was 10 ns we had two choices: we could schedule either p1 or p2 first.

    Simulation time = 10 ns

    Evaluation phase (delta cycle = 0): At this point, as I mentioned earlier, we can run either p1 or p2 first. In the first case we assumed p2 was run first. But assume now that p1 is run first instead of p2. So p1 gets executed and the statement evt1.notify(5, SC_NS) is encountered. As a result p3 gets into the list T, and then p1 is put to sleep. Now process p2 gets scheduled, and the first statement we encounter is evt1.cancel(), which would cancel the pending notification evt1.notify(5, SC_NS) issued by process p1. After that, evt1.notify() is executed, which results in p3 getting into the R list. So, p3 being the only process in the list R, we execute p3, and evt1 is notified at simulation time 10 ns.

    Question 3: How come the order of thread registration actually affects the order in which the processes are scheduled?

    I am not sure if my reasoning is correct, so I would appreciate your feedback, as I am only a beginner in SystemC. Looking forward to your feedback. Ivan.
  17. 1 point
    Martin Barnasconi

    Error: System not scheduable

    It looks like you have a multi-rate system, i.e. somewhere you defined a <port>.set_rate(..) in a set_attributes callback. Now you try to access the n-th sample at this port, like <port>.read(<sample>), but the sample index is higher than the rate specified. This means you either have the wrong rate, or you are reading a sample outside the range defined by the rate.
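    A minimal SystemC-AMS TDF sketch of where such rates come from (module and port names are illustrative): with in.set_rate(2), only in.read(0) and in.read(1) are valid inside processing(); in.read(2) would be exactly the kind of out-of-range access described above.

    #include <systemc-ams.h>

    SCA_TDF_MODULE(decimator) {
        sca_tdf::sca_in<double>  in;
        sca_tdf::sca_out<double> out;

        void set_attributes() {
            in.set_rate(2);    // consume 2 input samples per module activation
            out.set_rate(1);   // produce 1 output sample per activation
        }

        void processing() {
            out.write(0.5 * (in.read(0) + in.read(1)));   // average the 2 samples
        }

        SCA_CTOR(decimator) : in("in"), out("out") {}
    };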
  18. 1 point
    David Black

    sytemc temperal decoupling

    Read IEEE 1666-2011 sections 9 & 10. Available for free download from http://www.accellera.org/downloads/ieee. Or watch a free video from https://www.doulos.com/knowhow/systemc/. Or take a class on SystemC & TLM from https://www.doulos.com/content/training/systemc_training.php. TLM 2.0 is a marketable skill and I get many requests for job references. You will need to become proficient at C++, SystemC fundamentals, and then TLM 2.0 in addition to having a good basic knowledge of hardware and software design.
  19. 1 point
    David Black

    approximately timed

    IEEE 1666-2011 section 10.2 states: IEEE 1666-2011 section 10.3.4 states:
  20. 1 point
    You can use a custom "creator" to initialize the elements of a vector with custom constructor parameters - here the inner vector. Something like this (assuming you have lambda support available):

    auto element_creator = [](const char* nm, size_t) // optional, depending on the "real" value type
      { return new sca_module(nm); };

    size_t inner_size = 42; // adjust for your needs, could also be a vector of sizes

    element.init( outer_size,
      [&](const char* nm, size_t)
        { return new sc_vector<sca_module>( nm, inner_size, element_creator ); } );

    If you don't have lambdas in your environment, you need to put the functionality into a custom function, e.g.

    static sc_vector<sca_module>* element_vector_creator(size_t size, const char* name, size_t)
      { return new sc_vector<sca_module>(name, size); }

    // using sc_bind to pass in the size - placeholders needed for the actual call
    element.init( outer_size,
      sc_bind(element_vector_creator, inner_size, sc_unnamed::_1, sc_unnamed::_2) );

    Hope that helps, Philipp
  21. 1 point
    Hi, can anyone provide an example of how to get a virtual interface in a sequence? I need to use the clk in the sequence. Thanks, praneeth
  22. 1 point
    Actually, you can start a sequence in any phase. It is more important to understand the domain/scheduling relationships between the task-based (i.e. run-time) phases. UVM undergoes a number of pre-simulation phases (build, connect, end_of_elaboration, start_of_simulation) that are all implemented with functions. Once those are completed, the task-based phases begin. The standard includes two schedules. One is simply the run_phase, which starts executing at time zero and continues until all components have dropped their objections within the run_phase. The other schedule contains twelve phases that execute in parallel with the run_phase. They are: pre_reset, reset, post_reset, pre_configure, configure, post_configure, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown. They execute in sequence. Every component has the opportunity to define or not define tasks to implement these phases. A phase starts only when all components in the previous phase have dropped their objections, and a phase continues to execute until all components have dropped their objections in the current phase. Many companies use the run_phase for everything because there are some interesting issues to consider when crossing phase boundaries. In some respects it may be easier to use uvm_barriers for synchronization. Drivers and monitors (things that touch the hardware) usually run exclusively in the run_phase, but there is nothing to prevent them from also having a reset_phase, main_phase, etc.
  23. 1 point
    bhunter1972

    parsing using system verilog

    SV isn't the best language to parse with, but Python is! You should consider having your Python script output real SystemVerilog code that can then be loaded into the simulator instead. Consider: for (addr, data) in write_commands: print >>sv_file, " block.write_data('h%s, 'h%s);" % (to_hex(addr), to_hex(data)) etc.
  24. 1 point
    David Long

    static and dynamic sensitivity

    Hi Amit, A process has static sensitivity if it contains one or more calls to wait() with no arguments. The sensitivity is set before the simulation starts running, usually by using "sensitive" in its parent module's constructor. Dynamic sensitivity is where a process contains one or more calls to wait(a_time_value) or wait(a_named_event). It is dynamic because the conditions that cause a thread process to wake up change as each wait statement is executed while the simulation runs. Here is a very brief example (not tested):

    SC_MODULE(mod) {
      sc_in<bool> clk;
      sc_event e;

      void proc1() {
        wait();           // static sensitivity
        e.notify();
      }

      void proc2() {
        while(1) {
          wait(e);        // wait for event (dynamic)
          // do something
          wait(1, SC_NS); // wait for time (dynamic)
          // do something
        }
      }

      SC_CTOR(mod) {
        SC_THREAD(proc1);
        sensitive << clk.pos();  // static sensitivity
        SC_THREAD(proc2);        // no static sensitivity, runs during initialization until the 1st wait is reached
      }
    };

    You can find further details in section 4.2 of the SystemC LRM (IEEE 1666-2011). Regards, Dave
  25. 1 point
    Assuming you're using plain signal ports, you can use the event member function to check whether a specific port has been triggered in the current delta cycle:

    sc_vector< sc_in<int> > in_vec;
    // ...
    SC_METHOD(proc);
    for( unsigned i = 0; i < in_vec.size(); ++i )
      sensitive << in_vec[i];
    // ...

    void proc() {
      for( unsigned i = 0; i < in_vec.size(); ++i )
        if( in_vec[i]->event() )
          std::cout << "in_vec[" << i << "] triggered." << std::endl;
    }

    Greetings from Oldenburg, Philipp